Harini_Jagannathan | Tags: tivoli-monitoring, cognos, reporting
Suppose you want to restrict users or clients to accessing only a subset of the virtual machines in your virtualized environment. This restriction can be imposed at the report level for the ITMfVE reports using Tivoli Common Reporting and Cognos.
cynthyap | Tags: image_management, cloud provisioning, usage monitoring, virtualization, cloud_computing, orchestration
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. As organizations move beyond simply managing their virtualized environments, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning is, in most cases, handled by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems is a challenge not addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
The need to orchestrate becomes clear when the various aspects of cloud management are brought together. At this stage of cloud, the value to the organization lies in simplifying the management of automation -- otherwise a balancing act across multiple hypervisors, resource usage, availability, scalability, performance and more -- based on business needs, with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it’s difficult to realize the full benefits of cloud computing. Stitching together best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
IBM SmartCloud Cost Management now provides usage metering and reporting for IBM SmartCloud Provisioning (SCP). This is now available for download on the ISM Library here: http://www.ibm.com/software/ismlibrary?NavCode=1TW10UM08
This new capability allows you to collect usage information from SCP environments using the SCP High Scale Low Touch (HSLT) commands. The new HSLT SCP Collector gathers usage data every hour and processes it once a day. Usage, Detail and Identifier data is stored on a daily basis. The usage data is then billed, stored and can be reported on monthly.
A sample job file is provided as part of this functionality to show how to bill each access id for the high-water mark of allocated resources in the month. The sample job file, SampleHSLT_SCP.xml, is divided into three separate jobs.
The first job, SCP_collect_HSLT_hourly_data, should be run every hour at XX:59. This job runs HSLT commands to collect all relevant resources for each access id that is using the SmartCloud Provisioning service. First, a list of all available access ids is collected using the command iaas-describe-accesses-by-user.
Then, for each access id, the command iaas-describe-resources-inuse-by-access is run to collect the relevant resources for that access id. The resources gathered per access id include:
Memory (MB) , Volume (GB), Number of Virtual Processors, Number of VM Instances, and Number of static IP Addresses.
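A rough sketch of what such an hourly collection loop does, with the two HSLT commands stubbed out. The output formats parsed below are my own illustrative assumptions, not the documented HSLT output:

```python
import subprocess

# Resource field names used here are illustrative placeholders.
RESOURCES = ["MemoryMB", "VolumeGB", "VirtualProcessors", "VMInstances", "StaticIPs"]

def collect_hourly_usage(run_cmd=None):
    """Sketch of the hourly HSLT collection: list access ids, then
    fetch the in-use resources for each one. run_cmd is injectable so
    the command invocations can be stubbed for testing."""
    if run_cmd is None:
        run_cmd = lambda *args: subprocess.run(
            args, capture_output=True, text=True, check=True).stdout
    usage = {}
    # Assumed format: one access id per line, e.g. "acc-001 alice"
    for line in run_cmd("iaas-describe-accesses-by-user").splitlines():
        if not line.strip():
            continue
        access_id = line.split()[0]
        fields = {}
        # Assumed format: "ResourceName value" pairs, one per line
        for row in run_cmd("iaas-describe-resources-inuse-by-access",
                           access_id).splitlines():
            name, _, value = row.partition(" ")
            if name in RESOURCES:
                fields[name] = float(value)
        usage[access_id] = fields
    return usage
```

In the real job the result would be appended to the daily CSR file; here it is simply returned as a dictionary keyed by access id.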
The HSLT commands also provide context information that feeds into the Account Code Structure. The Account Code Structure includes the following identifiers:
The second job, SCP_Process_daily_data, should be run every day some time after midnight. This job processes the daily CSR file and extracts the maximum value across the day for each resource for each access id. The resource values are then stored in the cimsresourceutilization table of the SmartCloud Cost Management database. Detail and Identifier data is stored in the cimsdetail and cimsident tables of the SmartCloud Cost Management database.
The third job, SCP_Process_monthly_data, should be run once a month at the start of the month. It processes the previous month's worth of data from the cimsresourceutilization table by extracting the maximum value for each resource for each access id. Billing is applied to the data using the relevant SmartCloud Cost Management rate codes, and the processed data is then stored in the cimssummary table of the SmartCloud Cost Management database, allowing reports to be run on the data.
The sample jobs can be customized for other charging algorithms if desired. Examples include charging on a daily (or hourly) basis in addition to, or instead of, a monthly basis. Tiered pricing logic can also be applied, for example to provide a charging amnesty for users or departments that stay below a certain threshold.
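As an illustration of the high-water-mark billing described above, here is a minimal sketch. The resource names, rates, and the optional amnesty threshold are hypothetical; the real processing is done by the SmartCloud Cost Management jobs and rate codes, not code like this:

```python
def monthly_charge(daily_maxima, rates, free_allowance=None):
    """Bill each access id for the monthly high-water mark of each
    resource. daily_maxima maps access_id -> list of per-day
    {resource: value} dicts; rates maps resource -> price per unit.
    free_allowance optionally exempts usage below a threshold
    (a simple 'charging amnesty')."""
    free_allowance = free_allowance or {}
    invoices = {}
    for access_id, days in daily_maxima.items():
        total = 0.0
        for resource, rate in rates.items():
            # High-water mark: the maximum daily value across the month.
            peak = max((d.get(resource, 0.0) for d in days), default=0.0)
            billable = max(peak - free_allowance.get(resource, 0.0), 0.0)
            total += billable * rate
        invoices[access_id] = round(total, 2)
    return invoices
```

Swapping `max` for `sum` over the days would turn this into daily-basis charging, which is the kind of customization the sample jobs allow.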
Rates are defined for each resource. These rates are used for billing purposes.
Additions have also been made to the existing SmartCloud Cost Management KVM collector to include new resources, and a separate job file has been included to add some SCP context data to the Account Code Structure, achieved by running HSLT commands.
For information about the existing TUAM KVM collector refer to the following link in the TUAM 7.3 Information Center:
The new resources for the KVM Collector include Bytes Received, Packets Received, Receive Packets Dropped, Receive Packet Errors, Bytes Transferred, Packets Transferred, Transfer Packets Dropped, Transfer Packet Errors, Log Size of VM Image, and Size of VM Image on Disk.
The new Account Code Structure for the KVM Collector contains the following identifiers: Service Region, Group, Username, Access id, VM Name
The VM Name contains the Access id allowing the information collected from the Hypervisor to be related back to the SmartCloud Provisioning identifiers.
The following reports are sample reports run on a system that has collected data from one Service Region on a SmartCloud Provisioning System:
Top 10 Pie Chart
Invoice By Account Level
Note also that other existing SmartCloud Cost Management collectors can collect information from VMware and Power hypervisors.
See the Information Center (http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/topic/com.ibm.ituam.doc_7.3/admin_win_dc/c_core_data_collectors.html) for details.
If you have any questions about this functionality, please contact John Buckley (John Buckley/Ireland/IBM) or Louise O'Halloran (Louise O'Halloran/Ireland/IBM).
cynthyap | Tags: virtualization, provider, cloud, csp, cloud_computing, msp, service provisioning
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSP) or cloud service providers (CSP), and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce human resources and error from manual tasks—all with an eye to drive revenue and acquire new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify the complex and time-consuming processes of creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the full spectrum of virtualization-to-orchestration functionality helps organizations manage their environments, high-scale provisioning in particular offers a cost-effective way to treat capacity as a business commodity -- a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative costs of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base enabled the company to expand, but it remained very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interrupting customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
SandraWeiss | Tags: kvm, virtualization, vmware, monitoring, esxi, health, smartcloud_health, solutions, smartcloud provisioning, cloud
Service Health for IBM SmartCloud Provisioning has officially reached general availability (GA) and can now be downloaded from the IBM Integrated Service Management Library (ISML).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring, utilizing a custom agent, OS agents, and the ITMfVE agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to identify and react quickly to issues in your environment -- such as an unresponsive compute node, high disk usage on storage nodes, or key kernel services not responding -- and so minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library (ISML) by following this link -> Service Health for IBM SmartCloud Provisioning
HopeRuiz | Tags: ibm_webcast, ibm_cloud, tivoli_user_community, tuc
Please join the Tivoli User Community for a live webinar, with an opportunity for questions, this coming Thursday, April 26, 2012, at 10:00 AM ET (USA).
TUC Webinar: Managing the Smarter Physical Infrastructure with the IBM SmartCloud Control Desk
Space is limited. Reserve your Webinar seat now at: http://tivoli-ug.org/events/community_webcasts/c/e/262.aspx
Overview: In this free webinar, the Tivoli User Community is given an exclusive opportunity to see a demonstration of this new SmartCloud offering and to ask questions of the IBM product team. IBM SmartCloud Control Desk is a unique new offering that provides integrated management and analytics-driven ITIL process automation. SmartCloud Control Desk provides a single platform—at a single price point—for managing incidents, problems, service requests, changes, configuration, releases, assets, procurement, service levels and licenses, and includes a service catalog. It is available in a wide range of delivery models, including traditional install, Software-as-a-Service, and virtual machine images.
This webinar will cover some of the innovative new features in Control Desk that allow it to "automate ITIL at cloud speed" and extend management to smarter physical infrastructures.
About The Speakers:
Spend the day in the cloud! This webinar takes place in the hour following the presentation and demo session for the IBM SmartCloud Provisioning Beta.
The Official Tivoli User Community is the largest online and offline organization of Tivoli professionals in the world – home to over 160 local User Communities and dozens of virtual/global groups from 29 countries – with more than 26,000 members. The TUC community offers Users blogs and forums for discussion and collaboration, access to the latest whitepapers, webinars, presentations and research for Users, by Users and the latest information on Tivoli products. The Tivoli User Community offers the opportunity to learn and collaborate on the latest topics and issues that matter most. Membership is complimentary. Join NOW!
rossella | Tags: smartcloud, beta, isaac, demo, provisioning, cloud
A presentation and demo session for IBM SmartCloud Provisioning will be held on Thursday, April 26th, at 3:00 PM Central European Time (CET).
The presentation will cover architectural changes in IBM SmartCloud Provisioning.
The demo will show how to register High Scale Low Touch as a cloud group in IBM SmartCloud Provisioning.
No password is required
rossella | Tags: isaac, smartcloud, beta, provisioning, download
A new beta drop for IBM SmartCloud Provisioning is available.
The key functionalities included are listed below:
If you would like to try out the new features without the effort of installing the product, join the community and play with our hosted beta.
If you would like to download the code, go here.
One of the messages behind cloud computing is "pay-per-use": the adoption of a virtualized, standardized, self-service and automated environment should come with the possibility of being charged only for the resources actually used.
IBM SmartCloud Provisioning is built around the ideas of low complexity, low administration and ease of use.
Keeping these messages in mind, I was thinking about how to extract metering information. I had in mind something easy, doable even by people who definitely do not want to invest in programming, and that does not require any modification to database tables to store historical data.
So I had a look at the available IBM SmartCloud Provisioning interfaces, and I found a couple of command-line commands that could help me achieve my goal:
iaas-describe-resources-inuse-by-access and iaas-describe-accesses-by-user
The first command displays the number of images and cores and the amount of memory and disk space in use by a specific access ID. These are the key measures usually taken into consideration in cloud computing for usage and accounting.
The second command shows the relationship between access IDs and user IDs. This mapping helps in building metering information per user rather than per access ID. In a simple environment the mapping is one-to-one, but you may, for example, have the same user accessing multiple VM regions and therefore having multiple associated access IDs.
Given these two commands, it is pretty straightforward to set up a couple of cron jobs or scheduled tasks (depending on whether you are on Linux or Windows) that run on a predefined schedule (for example, once an hour) to extract this data and store it in a temporary file.
You can then have another cron job or scheduled task that aggregates this information daily, per user, perhaps applying your specific rate codes. If you store the result in, for example, a CSV file, you can easily import it into a reporting engine.
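A minimal sketch of that daily roll-up step. The CSV layout, the column order, and the access-ID-to-user mapping below are my own illustrative assumptions, not a format the product defines:

```python
import csv
import io
from collections import defaultdict

def daily_rollup(hourly_csv, access_to_user):
    """Roll hourly samples up to a per-(user, resource) daily maximum.
    hourly_csv holds assumed 'access_id,resource,value' rows; the
    access_to_user dict comes from iaas-describe-accesses-by-user
    output (format assumed)."""
    peaks = defaultdict(float)
    for row in csv.reader(io.StringIO(hourly_csv)):
        access_id, resource, value = row[0], row[1], float(row[2])
        # Meter per user, not per access ID; unknown ids fall back
        # to being keyed by the access id itself.
        user = access_to_user.get(access_id, access_id)
        key = (user, resource)
        peaks[key] = max(peaks[key], value)
    return dict(peaks)
```

Because two access IDs can map to the same user (the multi-region case above), the roll-up takes the maximum across all of that user's access IDs.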
Antonio_Di_Cocco | Tags: virtual_image_consolidati..., virtual_image, virtual_image_library, image_consolidation, image-library
Using IBM SmartCloud Provisioning, end users can easily create and use new virtual machines without worrying about how they run or where they came from. Users simply pick an image from a catalog and deploy it. This means that, in order to fulfill all requests, the catalog should be as wide as possible: theoretically, it should contain every possible combination of operating system plus software, meaning cloud administrators would have to manage thousands and thousands of images. The number of base images grows quickly, driving up management costs, which may turn a cost-saving infrastructure into a much more expensive solution. On the Internet there are several suggestions for consolidating your image catalog to keep it as small as possible.
The best way to create and maintain a small image catalog is to define a few standard configurations based on users' job roles. For example, a developer could require an Ubuntu system with Lotus Notes and Rational Software Architect installed, while a tester may need a Windows system with the TEM agent, DB2 and some other middleware to run his test scenarios. In this case we can define a standard so that any user can request an end-usage-driven virtual image deployment: he or she deploys an image selected not by its content, but by the job it serves.
Even if this is a good suggestion, it may not be easy to implement. If you are creating your cloud solution from scratch, you can require end users to select the best-fitting image from a small catalog. But if your cloud environment has already been up and running for a while and your image catalog is already out of control, consolidating it may not be an easy job. Cloud administrators would have to open every virtual image to look into it and understand its content, and then decide which images are the most representative, making them master templates. Just imagine doing this job for thousands of images.
Luckily, IBM Virtual Image Library will help you with this work.
One of the key features of IBM Virtual Image Library is its capacity to introspect virtual images, understanding their content and allowing cloud administrators to compare them with one another. In this way they can understand how similar two images are. When you register an operational repository with IBM Virtual Image Library, a discovery process starts. During this phase, information about all images and virtual machines is retrieved from the remote repository. At this point, only metadata about the remote objects is stored locally. The virtual images are also indexed by reading their contents remotely.
Once registration has finished, you can start working with your catalog. The most useful operations are:
As you can imagine, with such a powerful tool it becomes an easy job to define your master images (defining standard configurations) and to consolidate the catalog by merging similar images into a smaller, more manageable set.
The core function behind this feature is IBM Virtual Image Library's capability to introspect remote images. There are two types of analysis available:
In both cases, IBM Virtual Image Library does not copy any image locally; it simply connects to the remote hypervisor data store to read the image disk, reducing the time and network traffic spent on this operation.
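IBM Virtual Image Library's actual comparison algorithm is not described here, but the general idea of ranking images by similarity can be sketched with a simple Jaccard index over each image's introspected software list. The function names and package lists below are illustrative only:

```python
def image_similarity(pkgs_a, pkgs_b):
    """Illustrative similarity score between two introspected images:
    the Jaccard index of their installed-software sets."""
    a, b = set(pkgs_a), set(pkgs_b)
    if not (a | b):
        return 1.0  # two empty images are trivially identical
    return len(a & b) / len(a | b)

def pick_master(images):
    """Naively choose a representative master template: the image
    most similar, on average, to all the others."""
    def avg_similarity(name):
        others = [image_similarity(images[name], pkgs)
                  for other, pkgs in images.items() if other != name]
        return sum(others) / len(others)
    return max(images, key=avg_similarity)
```

A real consolidation would weight packages, versions, and configuration rather than treating all software equally, but the ranking idea is the same.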
IBM Virtual Image Library allows you to introspect remote images in order to consolidate them, removing the unnecessary ones. Once you have consolidated them, you can also import the smaller catalog into the IBM Virtual Image Library reference repository, allowing you to move images across hypervisors, as mentioned in Image portability across hypervisors.
AHUP_Gianluca_Bernardini | Tags: scp, cloud, eip, ebs, virtualization
In a dynamic cloud environment standard concepts like IP addresses and storage volumes assume a special meaning when it comes to reserving and using them regardless of the virtual machines owned by a cloud user.
The concept of Elastic IP (EIP) and Elastic Block Storage (EBS) was initially introduced by Amazon EC2 as a way to decouple the resources assigned to a cloud user from their utilization. In other words, as a cloud user you can reserve an elastic resource and assign it to one of the VMs you own, but you can also re-assign it to a different VM whenever you need (for example, whenever you need to replace your VM with a new one).
SmartCloud Provisioning offers similar capabilities exposing the concepts of Static Addresses and Persistent Volumes that can be reserved and assigned to any running VMs.
A SmartCloud Provisioning address is a statically defined address which can be dynamically bound to any instance in the cloud. In other words, a static IP address is associated with your account, not with a particular instance, and you control that address until you choose to explicitly release it.
Let’s examine in more detail how it works.
When SmartCloud Provisioning creates a VM, it assigns a dynamic IP address to it, on a default management sub-network. From this point on, the system always refers to the VM using the dynamic address assigned at boot time. Nonetheless, SmartCloud Provisioning offers to cloud users the possibility of assigning a different IP address, which can be seen as a reserved and static IP.
In order to achieve this result, a centralized pool of addresses is registered by the cloud administrator and stored in a durable data service. A cloud user can then reserve one or more addresses from this pool, and can associate one of them to a specific VM he owns. Note that the cloud user does not have any clue about which address will be reserved for him; he does not even know upfront if there is any static IP address left, until he sends the reservation request.
Once a static IP has been reserved and assigned to a VM, SmartCloud Provisioning internally creates a mapping between the default dynamic address associated to the selected VM and the reserved IP address. This translates into NAT rules on the host OS's iptables to forward all traffic to the private address of that VM.
In this way you can always refer to your VM using the static address, and even if you decide to re-create the VM, you can reassign that same address to the new VM.
The address remains in your reserved list as long as you need it, and you can release it when you no longer need it.
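The reserve/assign/release lifecycle described above can be sketched as follows. The class and method names are illustrative, not the SmartCloud Provisioning API:

```python
class StaticAddressPool:
    """Sketch of static-address semantics: a centralized pool
    registered by the admin, per-account reservations, and a
    rebindable address-to-VM mapping (which the real system turns
    into NAT rules on the host)."""
    def __init__(self, addresses):
        self.free = list(addresses)   # pool registered by the cloud admin
        self.reserved = {}            # address -> owning account
        self.bound = {}               # address -> VM id (NAT target)

    def reserve(self, account):
        # The user cannot choose which address they get, and only
        # finds out whether any are left when they ask.
        if not self.free:
            raise RuntimeError("no static addresses left")
        addr = self.free.pop(0)
        self.reserved[addr] = account
        return addr

    def assign(self, addr, vm_id):
        # Re-assigning to a new VM just updates the mapping.
        self.bound[addr] = vm_id

    def release(self, addr):
        self.bound.pop(addr, None)
        del self.reserved[addr]
        self.free.append(addr)
```

The key property is that the address belongs to the account, not the VM: `assign` can be called again after the VM is re-created, and the address only returns to the pool on an explicit `release`.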
Persistent storage is critical to any non-trivial production application. Just as Amazon's EBS has proven to be extremely valuable, SmartCloud Provisioning persistent volumes are equally powerful, offering off-instance storage that persists independently of the life of an instance. Users can create arbitrary numbers of arbitrarily sized persistent volumes. A volume can be dynamically attached to any VM in the cloud, as long as it is attached to only one instance at a time.
Once attached, a persistent volume appears to the guest OS like any other raw, unformatted block device.
Each persistent volume is assigned a UUID, which the cloud user can use to track it.
Volumes can easily be combined into RAID sets, ensuring that each volume is hosted on a separate physical host or device.
Multiple block devices are then exposed to the guest OS, which can build its own RAID meta-devices using tools like mdadm.
Behind the scenes, these block devices are very similar to the primary boot disk of a non-persistent VM. However, they are read-write iSCSI devices, directly attached to the instance without leveraging copy-on-write. Note that persistent block storage is also hosted on the same storage cluster used for master images.
Similarly to the static IP addresses, the persistent volumes are associated with your account, not with a particular instance, and you control them until you choose to explicitly delete them.
The persistent volumes allow you to keep your data separate from the OS, offering you the possibility to move them from a VM to another whenever you need. Moreover, they offer a valid mechanism to keep your data safe when dealing with VMs that do not have a dedicated persistent storage (the non-persistent VMs managed by SmartCloud Provisioning).
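A minimal sketch of the persistent-volume semantics described above: a UUID per volume, at most one attachment at a time, and volumes that outlive their instances. The names are illustrative, not the product API:

```python
import uuid

class PersistentVolumes:
    """Sketch of off-instance storage: volumes are account-owned
    objects tracked by UUID, attachable to at most one VM at a time,
    and they survive the destruction of any VM."""
    def __init__(self):
        self.volumes = {}    # volume uuid -> size in GB
        self.attached = {}   # volume uuid -> VM id

    def create(self, size_gb):
        vol_id = str(uuid.uuid4())
        self.volumes[vol_id] = size_gb
        return vol_id

    def attach(self, vol_id, vm_id):
        # Enforce the one-instance-at-a-time constraint.
        if vol_id in self.attached:
            raise RuntimeError("volume already attached")
        self.attached[vol_id] = vm_id

    def detach(self, vol_id):
        self.attached.pop(vol_id, None)

    def destroy_vm(self, vm_id):
        # Volumes persist: destroying a VM only removes attachments.
        for v, owner in list(self.attached.items()):
            if owner == vm_id:
                del self.attached[v]
```

After `destroy_vm`, the volume (and its data) is still there and can be attached to a replacement VM, which is exactly the data-safety property the post describes for non-persistent VMs.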
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
Harini_Jagannathan
When you modify an existing out-of-the-box report, how do you replace IBM logo with your company logo? It’s as simple as copying your image in a couple of locations and pointing an image URL within Report Studio to your image. Let’s see how you can do this with TCR 2.1.
For TCR 2.1, images need to be stored in 2 locations:
<TIP_HOME>\profiles\TIPProfile\installedApps\TIPCell\IBM Cognos 8.ear\p2pd.war\tivoli\<product>\images\
The first directory listed above is used for HTML reports displayed within TCR. The second directory is used for Excel, PDF, and emailed HTML.
We're pleased to make Service Health for IBM SmartCloud Provisioning available as a beta. As this is a beta, we welcome any and all feedback.
Service Health (Beta) for IBM® SmartCloud Provisioning provides prebuilt integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring. This solution allows you to easily monitor your IBM SmartCloud Provisioning infrastructure to identify and react to issues in your environment.
This solution is available via the IBM Integrated Service Management Library (ISML). You can find it here -> Service Health for IBM SmartCloud Provisioning. Please use the "Comment or Review" link on that page to post feedback, or the "Contact Provider" link.
AHUP_Gianluca_Bernardini | Tags: provisioning, cloud, bot, decentralized, smartcloud, scale-out, hslt, p2p
SmartCloud Provisioning is designed to minimize the use of a centralized “command and control” approach, in favor of scale out management, where endpoints can participate in management activities and do not depend on a single configuration management database.
This allows SmartCloud Provisioning to handle multiple provisioning tasks in parallel, across an unlimited number of servers.
Cloud users can request deployments of virtual machines and have access to the provisioned systems in very few seconds, thanks to the parallel and distributed processing that happens transparently and under the covers.
Let’s drill down into the details about this distributed management approach.
SmartCloud Provisioning internally uses a peer to peer (P2P) messaging infrastructure to pass provisioning and management messages between agents, which contribute to the decentralized control.
Agents are installed on the compute nodes (i.e. the hypervisors) as well as on the storage nodes, where images and volumes reside.
The P2P connections between agents not only allow the agents to monitor their own health, implementing a low-touch management infrastructure, but also orchestrate communications to achieve effective load distribution and decentralized management of the requests performed by cloud users.
The P2P communication overlay is backed by a distributed lock service, which is based on ZooKeeper.
ZooKeeper is a distributed, open-source coordination service for distributed applications, which exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming. It is designed to be easy to program, and uses a data model styled after the familiar directory tree structure of file systems.
Like the distributed processes it coordinates, ZooKeeper itself is intended to be replicated over a set of servers that must all know about each other. They maintain an in-memory image of the state, along with transaction logs and snapshots in a persistent store.
SmartCloud Provisioning agents connect to a single ZooKeeper server. Each agent maintains a TCP connection with the ZooKeeper server, through which it sends requests, gets responses, gets watch events, and sends heartbeats. If the TCP connection to the server breaks, the agent connects to a different server.
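That failover behaviour can be modeled very simply. Real agents use the ZooKeeper client library for this; the sketch below only illustrates the server-selection logic, and the class and method names are illustrative:

```python
class AgentConnection:
    """Sketch of an agent's ensemble failover: hold one connection
    to a ZooKeeper ensemble member, and move to the next member
    when that connection breaks."""
    def __init__(self, servers):
        self.servers = list(servers)  # the ensemble members
        self.index = 0                # currently connected server

    @property
    def current(self):
        return self.servers[self.index]

    def on_connection_lost(self):
        # Try the next ensemble member, wrapping around.
        self.index = (self.index + 1) % len(self.servers)
        return self.current
```

Because every ensemble member holds a replicated in-memory image of the state, the agent can resume work against whichever server it lands on.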
When a deployment request is received by SmartCloud Provisioning, the request is processed by the Web Services layer, passed to the management infrastructure, and managed by the agents and the ZooKeeper services.
The following steps describe the internal communications in more detail, as depicted in figure 1 below.
This processing happens transparently and very quickly, under the covers: the end user just sees the deployment request being served in a few seconds, without having to worry about any of the steps above.
This approach achieves high levels of parallelism and decentralized management, as well as scale-out capabilities that can be extended simply by increasing the number of servers.
If you're interested in trying the SmartCloud Provisioning distributed management capabilities, you can download a trial version from the following link:
Jacques.Fontignie | Tags: provenance, versioning, image-library, virtualization, cloud, image library
Cloud systems have made huge improvements in terms of tracking and performance. In the “Rapid deployments with IBM Smart Cloud Provisioning” blog, we showed that virtual machines or appliances can be started and configured in a matter of seconds. It has never been so easy to create a virtual machine (VM), install software, and configure middleware. However, with great power comes great responsibility: it is now possible to create a VM, but what is its lifecycle? Will it be destroyed after being used? Is the starting image deprecated, or is there a better starting image given the needed configuration and software install requirements?
IBM SmartCloud Provisioning provides a component called IBM Virtual Image Library (also known as IVIL) to solve common issues that arise in large scale virtualized environments:
IVIL can be integrated simply into your virtualization infrastructure; the only requirement to start using IVIL is the credentials needed to contact the virtualization infrastructure. No changes to your current virtualization environment are required. After credentials are provided, IVIL can automatically determine the provenance, state, and content of each virtual image or virtual machine in the virtualization environment. After the environment is registered, you will have a clear picture of your various images, their content, history, and similarity with one another. More importantly, as soon as IVIL is used in the infrastructure, it can be used to move images from one hypervisor vendor to another and keep track of these migrations. To summarize, IVIL not only keeps track of the changes to an image on one hypervisor, but continues to do so when images move across a heterogeneous environment.
A common way to track the contents and versioning of images is a naming convention. For example, a name such as RHEL_6.1_WebSphere7.1_v2.1 implies that the image is Red Hat Enterprise Linux 6.1 with WebSphere 7.1 installed, and that this is version 2.1 of the image. This approach is feasible with a small number of images but becomes cumbersome and confusing beyond that. The basic information such a name typically tries to convey includes the operating system and its version, the installed software and its version, and the version of the image itself.
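To see why naming conventions break down, consider a hypothetical parser for names of the form above. The regular expression and function names below are illustrative, not part of IVIL; the sketch simply shows that the convention handles one OS and one application, and fails as soon as an image carries more:

```python
import re

# Hypothetical parser for the naming convention described above:
# <OS>_<osVersion>_<Software><swVersion>_v<imageVersion>
NAME_PATTERN = re.compile(
    r"^(?P<os>[A-Za-z]+)_(?P<os_version>[\d.]+)_"
    r"(?P<software>[A-Za-z]+)(?P<sw_version>[\d.]+)_v(?P<image_version>[\d.]+)$"
)

def parse_image_name(name):
    """Extract OS, software, and image version info from an image name."""
    match = NAME_PATTERN.match(name)
    return match.groupdict() if match else None

info = parse_image_name("RHEL_6.1_WebSphere7.1_v2.1")
print(info["os"], info["os_version"])        # RHEL 6.1
print(info["software"], info["sw_version"])  # WebSphere 7.1
print(info["image_version"])                 # 2.1

# The convention breaks as soon as an image carries two applications:
print(parse_image_name("RHEL_6.1_WebSphere7.1_DB2_9.7_v1.0"))  # None
```

Every new piece of information (a second application, a patch level, a hypervisor target) forces a change to the convention and to every tool that parses it, which is exactly the scaling problem IVIL's metadata-based approach avoids.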
An image naming convention can work in some cases and convey some of the needed information, but it does not scale beyond a small number of simple images. To solve this, IVIL provides versioning and provenance control to capture where an image comes from:
What is provenance? Simply put, provenance tracks the history of an image as it evolves over time in the virtual environment. It records how the bits that make up the image came to be, through IVIL checkout operations, image clone operations, image copy operations, and so on. It is used to understand the lineage of an image from the perspective of the virtual system, which may or may not match how the user of IVIL views the image.
For example, assume that you have an image called “A”. If you start this image on multiple instances of IBM SmartCloud Provisioning, or if you clone it, possibly multiple times, IVIL keeps track of the relationship between all the created images and instances. If a security flaw is later found in A, you can infer that the associated images and instances are likely also affected. IVIL provides this functionality not only within a single virtual environment, but also across heterogeneous virtual environments.
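Since each image or instance is derived from exactly one source (by checkout, clone, or copy), provenance forms a tree. The following sketch, with hypothetical image names, shows how recording a single provenance parent per image is enough to answer the security question above; it is an illustration of the idea, not IVIL's actual data model:

```python
# Each image or instance records the single image it was derived from,
# forming a provenance tree rooted at the original image.
provenance_parent = {
    "A-clone-1": "A",             # clone of A on one hypervisor
    "A-clone-2": "A",             # clone of A on another hypervisor
    "A-instance-1": "A-clone-1",  # running instance started from the clone
    "B": "template-X",            # unrelated image, different lineage
}

def affected_by(flawed_image, parents):
    """Return every image/instance whose lineage includes flawed_image."""
    affected = set()
    for image in parents:
        node = image
        while node in parents:        # walk up toward the root
            node = parents[node]
            if node == flawed_image:
                affected.add(image)
                break
    return affected

# A security flaw found in image "A" propagates to its descendants:
print(sorted(affected_by("A", provenance_parent)))
# ['A-clone-1', 'A-clone-2', 'A-instance-1']
```

Note that image "B" is untouched: its lineage leads to a different root, so the flaw in A tells us nothing about it.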
What is versioning? Versioning is the logical, user-defined lineage of an image or virtual appliance; it is the way a user thinks of the image's functionality evolving, for example, “this is version 2 of my AccountsPayableService virtual image.” When an image provides a particular application version, the underlying OS and libraries are often secondary; only the application matters. Is it important to know the image's template? Not necessarily; often only the information about the OS is relevant. It is, however, important to know the application version and whether a newer version is available, or whether a new image has been released with the latest security patches. This is the versioning system in IVIL: it helps you understand whether other versions of an application exist in the infrastructure, and whether a given application contains a patch or not.
To summarize, provenance is oriented toward infrastructure administration, whereas versioning is more oriented toward applications and workloads.
For example, assume that we want to provide version 1.0 of software S as an image. By default, users can choose software S and start instances of image A. At some point, version 1.0 is deprecated and we must upgrade software S to version 1.1; unfortunately, the OS distribution must be upgraded as well. One solution is to reinstall the OS from scratch and install S version 1.1 on it; call this new image B. Images A and B share no lineage from a provenance perspective, yet their content has a logical lineage to the user: image A is the parent of image B from a versioning perspective.
It is important to understand that an image can have only one provenance parent but can have multiple version parents. The second point makes sense because an image may have multiple applications installed, and each application may be associated with its own logical version parent.
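Because of the multiple-parent rule, versioning forms a directed acyclic graph rather than a tree. The sketch below, with hypothetical image names, models the A-to-B upgrade described above plus an image that carries two applications; it illustrates the distinction between the two lineages rather than IVIL's actual API:

```python
# Provenance is a tree (one parent per image), while versioning is a DAG
# (an image may have one version parent per application it carries).
provenance_parents = {"A": None, "B": None}  # B was reinstalled from scratch
version_parents = {
    "B": ["A"],               # B supersedes A for software S (1.0 -> 1.1)
    "C": ["B", "webapp-v3"],  # C carries two applications: two version parents
}

def newer_versions(image, versions):
    """Images that logically supersede `image` in the versioning DAG."""
    result = set()
    for child, parents in versions.items():
        if image in parents:
            result.add(child)
            result |= newer_versions(child, versions)
    return result

# A has no provenance children, yet versioning still finds its successors:
print(sorted(newer_versions("A", version_parents)))  # ['B', 'C']
```

An administrator asking “which images are derived from A's bits?” walks the provenance tree and finds nothing; a user asking “is there a newer image for my application?” walks the versioning DAG and finds B and C.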
This concludes the introduction of the Virtual Image Library component in IBM SmartCloud Provisioning. Next time, I will introduce the concept of similarity between images and the power it provides for debugging, infrastructure consolidation, licensing costs, and more.