The solution Endpoint security for SmartCloud Provisioning v2.1 has been published on the IBM Integrated Service Management Library (ISML). Its purpose is to demonstrate how IBM Tivoli Endpoint Manager can be integrated with the IBM SmartCloud Provisioning infrastructure.
Endpoint security for SmartCloud Provisioning generates the components required by IBM SmartCloud Provisioning 2.1 to automatically install IBM Tivoli Endpoint Manager agents when deploying virtual systems, allowing cloud administrators to easily maintain compliance across their virtualized network. Both IBM SmartCloud Provisioning v2.1 and IBM Tivoli Endpoint Manager v8.2 need to be available. If you are participating in the IBM SmartCloud Provisioning v2.1 beta and have IBM Tivoli Endpoint Manager, consider using Endpoint security as well.
We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.
The IT organization must have the capabilities both to deliver services more quickly to meet the demands of the business and to provide high levels of security and compliance. In the past, the delivery of services was typically the bottleneck in providing new services, but now, with automated cloud and self-service delivery models, the teams responsible for change management and security can quickly become the bottleneck due to manual processes and siloed tools.
For example, organizations need the ability to patch all of their systems, both physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications and related security configurations. And there can be too many security exposures with offline and suspended VMs that haven’t been patched in weeks or months.
A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.
Cloud provisioning should combine application and image provisioning for workload optimized clouds and deliver:
· Reduced costs with automated high-scale provisioning; multiple hypervisor options and hardware of choice
· Accelerated time-to-market with standardized pattern-based deployment for workload optimized cloud
· Image sprawl prevention with in-built advanced image lifecycle management capabilities
· Ease of adoption and clear roadmap to move to advanced cloud capabilities
Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:
· Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms
· Automatic assessment and “single click” remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly
· Enterprise-class scalability and security to provide proven scalability, including fine-grained authorization and access control capabilities
Explore these capabilities with the new IBM SmartCloud Patch Management.
Service Health for IBM SmartCloud Provisioning has officially GA'ed and is now available on the IBM Integrated Service Management Library (ISML).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring, utilizing a custom agent, OS agents, and the ITM for Virtual Environments (ITMfVE) agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to quickly identify and react to issues in your environment and minimize their impact, such as an unresponsive compute node, high disk usage on storage nodes, or key kernel services not responding. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library (ISML) following this link -> Service Health for IBM SmartCloud Provisioning
In a dynamic cloud environment, standard concepts like IP addresses and storage volumes assume a special meaning when it comes to reserving and using them independently of the virtual machines owned by a cloud user.
The concept of Elastic IP (EIP) and Elastic Block Storage (EBS) was initially introduced by Amazon EC2 as a way to decouple the resources assigned to a cloud user from their utilization. In other words, as a cloud user you can reserve an elastic resource and assign it to one of the VMs you own, but you can also re-assign it to a different VM whenever you need (for example, whenever you need to replace your VM with a new one).
SmartCloud Provisioning offers similar capabilities, exposing the concepts of Static Addresses and Persistent Volumes that can be reserved and assigned to any running VM.
A SmartCloud Provisioning address is a statically defined address which can be dynamically bound to any instance in the cloud. In other words, a static IP address is associated with your account, not with a particular instance, and you control that address until you choose to explicitly release it.
Let’s examine in more detail how it works.
When SmartCloud Provisioning creates a VM, it assigns a dynamic IP address to it, on a default management sub-network. From this point on, the system always refers to the VM using the dynamic address assigned at boot time. Nonetheless, SmartCloud Provisioning offers to cloud users the possibility of assigning a different IP address, which can be seen as a reserved and static IP.
To achieve this, a centralized pool of addresses is registered by the cloud administrator and stored in a durable data service. A cloud user can then reserve one or more addresses from this pool, and can associate one of them with a specific VM he or she owns. Note that the cloud user has no control over which address will be reserved; the user does not even know upfront whether any static IP address is left until the reservation request is sent.
Once a static IP has been reserved and assigned to a VM, SmartCloud Provisioning internally creates a mapping between the default dynamic address associated with the selected VM and the reserved IP address. This translates into NAT rules in the host OS's iptables that forward all traffic to the private address of that VM.
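To make the mechanism concrete, here is a minimal sketch of the kind of NAT rules involved; the Python wrapper, the function name, and the example addresses are assumptions for illustration, not the product's actual implementation:

```python
import subprocess

def map_static_ip(static_ip: str, private_ip: str) -> None:
    """Forward traffic addressed to a reserved static IP to the VM's
    private (dynamic) address, and rewrite the source of its replies.
    Illustrative only; runs on the host and requires root."""
    rules = [
        # Inbound: rewrite destination static_ip -> private_ip
        ["iptables", "-t", "nat", "-A", "PREROUTING",
         "-d", static_ip, "-j", "DNAT", "--to-destination", private_ip],
        # Outbound: rewrite source private_ip -> static_ip
        ["iptables", "-t", "nat", "-A", "POSTROUTING",
         "-s", private_ip, "-j", "SNAT", "--to-source", static_ip],
    ]
    for rule in rules:
        subprocess.run(rule, check=True)

# Hypothetical addresses: the reserved static IP and the VM's dynamic one.
map_static_ip("9.1.2.3", "172.16.0.42")
```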
In this way you can always refer to your VM using the static address, and even if you decide to re-create the VM, you can reassign that same address to the new VM.
The address remains in your reserved list as long as you need it, and you can release it when you no longer need it.
Persistent storage is critical to any non-trivial production application. Just as Amazon's EBS has proven to be extremely valuable, SmartCloud Provisioning persistent volumes are equally powerful, offering off-instance storage that persists independently of the life of an instance. Users can create arbitrary numbers of arbitrarily sized persistent volumes, which can be dynamically attached to any VM in the cloud, as long as each volume is attached to only one instance at a time.
Once attached, a persistent volume appears to the guest OS like any other raw, unformatted block device.
Each persistent volume is assigned a UUID, which the cloud user can leverage to track it.
RAID sets can easily be created by requesting multiple volumes and ensuring each volume is hosted on a separate physical host/device. Multiple block devices are then exposed to the guest OS, which can build its own RAID meta-devices using tools like mdadm.
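As a guest-side illustration, the sketch below assembles two attached persistent volumes into a RAID-1 meta-device with mdadm, then formats and mounts it; the device names and mount point are assumptions for illustration:

```python
import subprocess

# Hypothetical device names for two persistent volumes seen by the guest.
devices = ["/dev/vdb", "/dev/vdc"]

# Build a RAID-1 meta-device across the volumes, format it, and mount it.
subprocess.run(["mdadm", "--create", "/dev/md0", "--level=1",
                f"--raid-devices={len(devices)}", *devices], check=True)
subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)
subprocess.run(["mkdir", "-p", "/mnt/data"], check=True)
subprocess.run(["mount", "/dev/md0", "/mnt/data"], check=True)
```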
Behind the scenes, these block devices are very similar to the primary boot disk of a non-persistent VM. However, these are read-write iSCSI devices, directly attached to the instance without leveraging copy-on-write. Note that persistent block storage is also hosted on the same storage cluster used for master images.
As with static IP addresses, persistent volumes are associated with your account, not with a particular instance, and you control them until you choose to explicitly delete them.
Persistent volumes allow you to keep your data separate from the OS, offering you the possibility to move them from one VM to another whenever you need. Moreover, they offer a valid mechanism to keep your data safe when dealing with VMs that do not have dedicated persistent storage (the non-persistent VMs managed by SmartCloud Provisioning).
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
I really liked the post “Rapid deployments with IBM SmartCloud Provisioning” that explains how simple and fast it is to deploy instances using SmartCloud Provisioning. But it is also important to have a flexible way of passing parameters during the deployment, in order to configure and/or customize the deployed instances.
IBM SmartCloud Provisioning provides, in the launch instance panel and also in the CLI, the “user_data” text field that can be used for this purpose. The field is inspired by the Amazon EC2 instance metadata, and here you can find an interesting article on it: http://alestic.com/2009/06/ec2-user-data-scripts
The user_data field is a free text field, so for example it can contain:
- comma-separated values for simple configurations
- multi-part MIME format for complex configurations, where each part, identified by a proper content-type, is related to a specific customization (see the sketch after this list)
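For the multi-part case, a payload can be composed with Python's standard library; the part contents, file name, and values below are invented for illustration:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()

# Part 1: a configuration fragment, identified by its content disposition.
config = MIMEText("db_host=10.0.0.5\ndb_port=5432", "plain")
config.add_header("Content-Disposition", "attachment", filename="app.conf")
msg.attach(config)

# Part 2: a script to run at deployment time (cloud-init-style content type).
script = MIMEText("#!/bin/sh\necho configured > /tmp/marker\n", "x-shellscript")
msg.attach(script)

user_data = msg.as_string()  # paste into the launch panel or pass via the CLI
```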
The launched instance can easily retrieve the user data by invoking the predefined URL http://169.254.169.254/latest/user-data and processing it according to its needs.
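A minimal retrieval sketch, run inside the launched instance (the comma-separated payload shown is an invented example):

```python
import urllib.request

# Fetch whatever was typed into the user_data field at deployment time.
with urllib.request.urlopen("http://169.254.169.254/latest/user-data") as resp:
    user_data = resp.read().decode("utf-8")

# Simple comma-separated case, e.g. "db_host=10.0.0.5,db_port=5432".
params = dict(kv.split("=", 1) for kv in user_data.split(",") if "=" in kv)
print(params)
```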
This can be achieved by exploiting the current integration between IBM SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in IBM SmartCloud Provisioning version 1.2: you create a new bundle, the User-Data consumer bundle, that contains a script that retrieves the “user-data” and processes it based on its content type. Another interesting scenario is the capability of directly passing one or more scripts to be invoked at deployment time, achieving a really dynamic configuration. In this way, a new image can be configured/customized at deployment time.
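Here is a sketch of what such a consumer script might do, assuming a multi-part MIME payload whose script parts use the cloud-init-style text/x-shellscript content type:

```python
import email
import subprocess
import tempfile
import urllib.request

raw = urllib.request.urlopen("http://169.254.169.254/latest/user-data").read()
msg = email.message_from_bytes(raw)

for part in msg.walk():
    if part.is_multipart():
        continue
    if part.get_content_type() == "text/x-shellscript":
        # Persist the script part to disk and invoke it at deployment time.
        with tempfile.NamedTemporaryFile("wb", suffix=".sh", delete=False) as f:
            f.write(part.get_payload(decode=True))
        subprocess.run(["sh", f.name], check=True)
    # Other content types (config fragments, etc.) would be handled here.
```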
If you want more information on user-data capabilities and examples, take a look at the Ubuntu cloud-init component described here: https://help.ubuntu.com/community/CloudInit. For more information about IBM SmartCloud Provisioning and the Image Construction and Composition Tool, see IBM SmartCloud Provisioning.
Cloud systems have made huge improvements in terms of tracking and performance. In the “Rapid deployments with IBM SmartCloud Provisioning” blog post, we showed that virtual machines or appliances can be started and configured in a matter of seconds. It has never been so easy to create a virtual machine (VM), install software, and configure middleware. However, with great power comes great responsibility… it is now possible to create a VM, but what is its lifecycle? Will it be destroyed after being used, is the starting image deprecated, or is there a better starting image given the needed configuration and software install requirements?
IBM SmartCloud Provisioning provides a component called IBM Virtual Image Library (also known as IVIL) to solve common issues that arise in large-scale virtualized environments:
- Image tracking: Where are my images? How old are they? How are they related?
- Compliance and security: What is in the images? Are they secure? What is the software inventory?
- Image analysis: Are there image redundancies? Is there any difference between two images?
The list goes on.
IVIL can be integrated simply into your virtualization infrastructure; the only requirement to start using IVIL is the credentials required to contact the virtualization infrastructure. No changes to your current virtualization environment are required. After credentials are provided, IVIL can automatically determine the provenance, state, and content of each virtual image or virtual machine in the virtualization environment. After the environment is registered, you will have a clear picture of your various images, their content, history, and similarity with one another. More importantly, as soon as IVIL is used in the infrastructure, it can be used to move images from one hypervisor vendor to another and keep track of these migrations. To summarize, IVIL not only keeps track of the changes of an image on one hypervisor but continues to do so when images move across a heterogeneous environment.
A common solution to tracking the contents and versioning of images is the use of a naming convention; for example, a name such as RHEL_6.1_WebSphere7.1_v2.1 implies the image is Red Hat Linux 6.1 with WebSphere 7.1 installed, and that this is version 2.1 of this image. It is feasible to use this approach with a small number of images, but it becomes cumbersome and confusing with anything but small examples (see the sketch after the list below). Basic information that such names typically attempt to convey includes:
- What is the OS and OS version?
- What applications are installed, and what are their versions?
- Are the latest patches and updates installed?
- How does this image relate to other versions of the same or similar images?
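To see why this approach breaks down, here is a small sketch (the regular expression and field names are invented for illustration) that tries to recover the metadata from the name alone; any image that deviates from the pattern yields nothing:

```python
import re

# All the metadata lives in the image name: fragile by design.
NAME_RE = re.compile(
    r"(?P<os>[A-Za-z]+)_(?P<os_version>[\d.]+)_"
    r"(?P<app>[A-Za-z]+)(?P<app_version>[\d.]+)_v(?P<image_version>[\d.]+)"
)

m = NAME_RE.match("RHEL_6.1_WebSphere7.1_v2.1")
print(m.groupdict() if m else "name does not follow the convention")
```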
Using an image naming convention can work in some cases and provide some of the needed information, but it does not scale beyond a small number of simple images. To solve this, IVIL provides versioning and provenance control to understand where an image comes from.
What is provenance? Simply put, provenance tracks the history of the image as it has evolved over time in the virtual environment. It tracks how the bits that make up the image came to be: through IVIL checkout operations, image clone operations, image copy operations, and so on. It is used to understand the lineage of an image from the perspective of the virtual system, which might or might not match with how the user of IVIL views the image.
For example, let’s assume that you have an image called “A”. If you decide to start this image on multiple instances of IBM SmartCloud Provisioning, or if you decide to clone this image, possibly multiple times, then IVIL will keep track of the relation between all the created images and instances. At any time, if a security flaw is found in A, then you can infer that the associated images and instances are likely affected as well. IVIL provides this functionality not only for a single virtual environment, but across heterogeneous virtual environments.
What is versioning? Versioning is the logical, user-defined lineage of an image or virtual appliance; it is the way a user would think of versioning his or her image functionality, for example, “this is version 2 of my AccountsPayableService virtual image.”
When an image provides a particular application version, the OS and libraries behind it are often not important; only the application is. Is it important to know its template? Not necessarily; only the information about the OS is relevant. However, it is good to know the application version, and whether a newer version is available for this image or a new image has been released with the latest security patches. This is the versioning system in IVIL; it helps you understand whether there are other versions of the application in the infrastructure, and whether some applications contain a patch or not.
To summarize, provenance is oriented toward infrastructure administration, whereas versioning is more oriented toward applications and workloads.
For example, let’s assume that we want to provide version 1.0 of software S as an image. By default, users can decide to use software S and trigger any instance of image A. At a certain point, version 1.0 is deprecated and we must upgrade software S to version 1.1. Unfortunately, the OS distribution must be upgraded too. A solution is to reinstall the OS from scratch and install S version 1.1 on it; this new image will be called B. These images do not have any common lineage from a provenance perspective; however, the content has a logical lineage to the user. Image A is the parent of image B from a versioning perspective.
It is important to understand that an image can have only one provenance parent but can have multiple version parents. The second claim makes sense because an image may have multiple applications installed, and thus each one may be associated with its own logical version lineage.
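A tiny data-model sketch (the class and field names are invented for illustration) captures this constraint directly: at most one provenance parent, arbitrarily many version parents:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Image:
    name: str
    provenance_parent: Optional["Image"] = None                    # at most one
    version_parents: List["Image"] = field(default_factory=list)   # zero or more

a = Image("A")               # image carrying software S 1.0
b = Image("B")               # fresh OS install carrying S 1.1
b.version_parents.append(a)  # logical lineage: A is a version parent of B
# b.provenance_parent stays None: B's bits were not derived from A's.
```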
This concludes the introduction of the Virtual Image Library component in IBM SmartCloud Provisioning. Next time, I will introduce the concept of similarity between images and the power that it provides in terms of debugging, infrastructure consolidation, licensing cost, and more.
As customers consolidate and virtualize application workloads along their journey toward Cloud, the cost savings that they had envisioned often prove elusive. True efficiency comes from the ability to right-size both the environment and the virtual workloads, in response to actual performance data rather than theoretical estimates, in order to create an optimized Cloud infrastructure that runs densely enough to provide true consolidation while maintaining application service levels and room for expansion. The migration to a Cloud infrastructure, where the physical resources that we're accustomed to monitoring have been "abstracted" into pools of virtual resources, presents us with a visibility problem. It's more difficult to tweak the knobs and turn the dials to make an individual server respond to our management needs. More importantly, any changes we make at the Cloud infrastructure level have the potential to dramatically affect other workloads and services.
Join us on February 16, 2012 for Simplify Cloud Management with IBM SmartCloud Monitoring, where Ben Stern will demonstrate how our latest infrastructure management offering can help a Cloud or virtualization administrator overcome those visibility hurdles, leveraging infrastructure monitoring, health dashboards, performance and capacity analytics, and policy-driven optimization of workloads and their placement in the Cloud. Most customers want a Cloud monitoring product that can be plugged into their existing data center monitoring toolset, as part of an enterprise-proven, heterogeneous solution, providing continuity of historical data and preservation of skills. You'll hear how SmartCloud Monitoring has descended from the same IBM Tivoli Monitoring DNA running in the data centers of the world's largest corporations, and quickly discover that you already know more about SmartCloud Monitoring than you realized.
Ben Stern has spent over 20 years working in the IT industry in a variety of management and technical roles within the software development organization. Prior to his current role, he was the lead for the Tivoli Service Availability and Performance Management Best Practices team. In that role, he helped define best practices for the Tivoli portfolio while working with hundreds of customers around the world. In his current role, he is focusing on Tivoli's virtualization and cloud solutions.
Link to Register
Select the session that fits your schedule.
February 16th 2012, 11:00 AM to Noon EST US and Canada (GMT -05:00): https://de202.centra.com:443/Reg/main/000000605ae4440134e542dc87007e8e/en_US
February 16th 2012, 6:00 PM to 7:00 PM EST US and Canada (GMT -05:00)
With December's release of IBM SmartCloud Monitoring, Tivoli's venerable IBM Tivoli Monitoring product family, proven in data centers at the world's largest corporations, begins to adopt a "Cloud" posture. Sure, "Cloud" is a term bereft of a clear operational definition that we can apply at any given moment, and customers, analysts and vendors tend to bandy it about pretty freely these days. However, if we don't get too hung up on what Cloud is or isn't, we can probably agree that it represents a migration from our traditional server-delivered infrastructure to one comprised of pooled computing resources shared by virtual workloads. Whether or not our customers are calling their virtualized environments "private clouds" today, and whether or not they've got a "cloud budget" that they're using for such initiatives, the fact that they're moving along the cloud maturity continuum at some pace seems inescapable, given IDC's assertion that we crossed the magical "50%" boundary last year, when half of all corporate workloads were running on virtual machines instead of physical ones.
If we're beginning to think in terms of clouds of pooled computing resources, it makes sense that we begin to deliver management solutions in the same way, right? If the server administrators, storage administrators and network administrators now report to a cloud administrator, we should begin to package solutions for those cloud administrators, combining multiple pieces of management technology into a single part number that customers can purchase and deploy. That's exactly what we've done with SmartCloud Monitoring. The discrete monitoring agents at the heart of IBM Tivoli Monitoring (OS monitors, application monitors, storage monitors, and so on) are as important as they ever were. Even though we're pooling those resources across virtual machines, we still have to monitor things like processes, CPU activity, IO throughput, and so on. We just need to add a layer on top of all that granular detail, so the cloud administrator can see, at a glance, what's healthy or unhealthy about his cloud environment, before drilling down into the nuts and bolts.
SmartCloud Monitoring combines the VMware virtualization management features in ITM for Virtual Environments with virtual machine instance monitoring from ITM's operating system agents, to monitor a cloud infrastructure and the workloads running on it.
Our roadmap looks like an analyst's cloud maturity ladder, adding features such as automated provisioning, usage and accounting integration, and more detailed network monitoring, so our solution will "mature" along with the market, and customers' needs. See if the challenges along this ladder look like things that you or your customer have faced on their cloud journey, or are grappling with now. It's important to note that Tivoli has solutions that can be applied to each step, and for each problem. What SmartCloud promises is a way to bring those solutions together into more consumable bundles, tightly integrated together, to make cloud management simple to purchase and simple to deploy.
SmartCloud Monitoring delivers key capabilities for optimizing and maintaining a private cloud, including:
- Health dashboards, to provide an instant, consolidated glimpse into cloud health
- Topology views of the key interrelated components of the cloud
- Reports on the health trends of cloud components and workloads, powered by Cognos
- What-If capacity planning scenarios
- Policy-based optimization to put workloads where they’ll perform best, not just where they’ll fit
- Performance Analytics for right-sizing of virtual machines
- Integration with industry-leading Tivoli service management portfolio
Modern Cloud infrastructures are built leveraging thousands of highly distributed servers, used to provide services directly to customers over the Internet. The service provider has two extremely important objectives, which, unfortunately, are to some degree in conflict: a) ensure continuous availability of the Cloud service, and b) contain the cost of the infrastructure and administration (CAPEX and OPEX).
There are several factors that have an impact on the availability of services, mostly related to infrastructure failures. Failures are not only related to unrecoverable hardware outages, but also to recoverable OS or middleware failures.
Not so long ago, the most common approach to high availability was to assume one could deploy infrastructures with the highest Mean Time To Failure (MTTF) possible, which required expensive systems and assumed the possibility to write error-safe software applications. It was also assumed that some degree of down-time was acceptable, with vendors boasting of the number of 9's that they could support (e.g. 99.999% availability). In today's always-on Internet, any downtime of major services becomes headline news. The traditional approach is no longer applicable, and a new approach has to be considered.
Given the requirement to reduce infrastructure costs, service providers are using commodity hardware. Given also the requirement to reduce operational costs, hardware failures are commonly dealt with by directly replacing the failed component rather than manual debugging and recovery by skilled (and expensive) administrators. Thus, to maintain the objective of continuous availability of the service, the Cloud system must be built in order to expect failure of the underlying infrastructure, and not only for temporary periods but it must assume that components will disappear forever. This cannot be limited to only hardware components, as no matter how well a software element is tested, unexpected edge conditions will appear at some point-in-time. So, to guarantee continuous availability, a Cloud solution must also expect its own components to fail too.
Given that we are forced to expect failure, the high MTTF approach is no longer valid, and instead we have to increase availability by flipping the approach to minimizing Mean Time To Recovery (MTTR). The quicker the system can recover from failure, the higher the availability of the service will be. Given however that even a tiny percentage of downtime is no longer acceptable, we also need a means to maintain service availability during the recovery process. One way of doing this is through providing redundancy of all critical services within the Cloud solution.
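The standard availability formula makes this concrete: availability = MTTF / (MTTF + MTTR). A quick computation (the MTTF and MTTR figures below are invented for illustration) shows why shrinking recovery time dominates:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the service is up: MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Same commodity-grade MTTF (1,000 hours), two recovery strategies:
print(f"{availability(1000, 10):.4f}")   # manual recovery, ~10 h  -> 0.9901
print(f"{availability(1000, 0.1):.4f}")  # automated recovery, ~6 min -> 0.9999
```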
SmartCloud Provisioning is designed according to the ROC (Recovery-Oriented Computing) principles: it is based on a highly distributed, redundant and robust infrastructure, with near zero downtime and automated recovery across heterogeneous platforms, and it does not require expensive systems but can run on relatively low-cost commodity infrastructure.
The key factors that allow SmartCloud Provisioning to be a low-touch and robust cloud infrastructure are the following:
- The infrastructure is as stateless as possible: this avoids issues related to single points of failure.
- Management agents are deployed on the physical nodes of the infrastructure (compute nodes and storage nodes) and are connected in a peer-to-peer network to form a self-monitoring and self-managing infrastructure.
- Core services are redundant, being deployed in clusters to tolerate individual faults.
- Master images are replicated in multiple copies across the storage nodes in the storage cluster; this tolerates HW failures of the storage nodes in the cluster as well as network failures when accessing one copy of the image.
- Hypervisor (compute) nodes are deployed via a stateless boot, so it becomes easier to re-deploy a failing hypervisor by simply rebooting it and getting a fresh new copy of the hypervisor image. This also allows easy deployment of new nodes if needed, to augment the capacity of the infrastructure.
Let's consider some typical failure scenarios that can happen in a real environment, and let's see how SmartCloud Provisioning is designed to tolerate them and react appropriately.
The first example is related to the management agents that are used by SmartCloud Provisioning to perform the standard provisioning operations.
Management agents are deployed on both the compute nodes and the storage nodes and are organized in dynamic hierarchies, where a leader (manager) is dynamically elected. The leader is just the entry point for distributing requests across the infrastructure and a coordinator of operations, but this role does not imply any special information being associated with the agent itself (stateless infrastructure): any agent can be a leader.
All the agents have a watch-dog mechanism that is used to prevent, detect and correct failures; they also monitor each other in their neighborhood and can start simple actions to fix other agents' issues.
So, if an agent fails, the watch-dog mechanism tries to restart it. If the watch-dog is not able to restart the agent, neighbors try some simple actions to restart the failing agent. If the agent cannot be restarted, the system keeps on working without that node, thanks to the redundant infrastructure.
If the failing agent was a leader and it cannot be restarted, the managed agents can re-elect their leader dynamically, without losing any information.
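Here is a minimal watch-dog sketch in the same spirit; the agent command, intervals, and restart threshold are hypothetical, and the neighbor-monitoring and leader-election logic are not shown:

```python
import subprocess
import time

AGENT_CMD = ["scp-agent"]   # hypothetical agent executable
CHECK_INTERVAL = 10         # seconds between liveness checks
MAX_LOCAL_RESTARTS = 3      # after this, leave recovery to the neighbors

proc = subprocess.Popen(AGENT_CMD)
restarts = 0
while restarts < MAX_LOCAL_RESTARTS:
    time.sleep(CHECK_INTERVAL)
    if proc.poll() is not None:        # agent process has died
        restarts += 1
        proc = subprocess.Popen(AGENT_CMD)
# If we get here, local recovery failed; the redundant infrastructure keeps
# working without this node, and a new leader is elected if one is needed.
```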
Another example is related to failures either in a storage node or in a compute node.
If a storage node fails, thanks to the redundant deployment and to the multiple copies of the same image available in the storage cluster, the deployment of VMs can continue without issues, and the leader agent will try to restart the failing node.
If a compute node fails, the leader detects the failure and stops sending requests to that node. Moreover, it tries to restart the node, forcing a fresh copy of the hypervisor image to be re-deployed via PXE boot.
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple—you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations who are still trying to leverage the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
- Rapidly scalable deployment designed to meet business growth
- Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
- Reduced complexity through ease of use and improved time to value
- Reduced IT labor resources with self-service requesting and highly automated operations
- Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we’ve seen customers get a cloud up and running in just hours, realizing immediate time to value. It’s fast: administrators have been able to go from bare metal to ready-for-work in under five minutes, start a single VM and load its OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
With the new release of IBM SmartCloud Provisioning this week, you can try and see firsthand the potential of this breakthrough technology to accelerate your journey to cloud. And if you want a preview of what’s in development, you can join our Open Beta program for access to beta-level code.
Starting from December 9th 2011, IBM SmartCloud Provisioning 1.2 is available for download.
The key features introduced in this release are:
Full product install through an interactive tool:
IBM SmartCloud Provisioning can now be installed using a graphical wizard. Two flavors of the installer are available: minimal and custom. The custom installation allows you to specify the number of instances of HBase and ZooKeeper to be deployed. Moreover, it allows you to automatically configure ESXi servers as compute nodes. The creation of the management virtual image on VMware is automated.
Support for multiple networks:
You can now deploy images with more than one NIC. Different users can deploy images in segregated networks.
Integration of the Image Construction and Composition Tool:
The Image Construction and Composition Tool helps build and customize master images. It is designed to facilitate a separation of concerns and tasks, where experts build software bundles for reuse by others. This design approach greatly reduces the complexity of virtual image creation and reduces errors.
Support for the Open Virtualization Format (OVF):
- OVF images can be created or modified by the Image Construction and Composition Tool
- OVF metadata can be displayed and modified in the Self Service UI
Integration of the Virtual Image Library component:
The Virtual Image Library helps manage the life cycle of virtual images:
- Search images for specific software products
- Compare two images and determine the differences in files and products
- Find similar images
- Track image versions and provenance
New administration UI:
The cloud administrator can use a brand new UI to perform tasks like registering images, registering networks, managing quotas, assigning roles, and managing elastic IPs.
The IBM® Image Construction and Composition Tool is a web application that simplifies and automates virtual image creation for public and private cloud environments, shielding the differences in cloud implementations from its users.
This white paper provides Software Specialists and other product experts with helpful tips and techniques to plan, design, and create software bundles in the Image Construction and Composition Tool.
Exciting news!! We announced this week the upcoming availability of IBM Tivoli Monitoring for Virtual Environments v7.1 (formerly known as ITM for Virtual Servers). Why did we change the name? Previously, our focus was on ensuring the health of the virtual server environment: VMs and hosts, and virtual storage and network elements like data store capacity. With this release, we now focus across the entire virtual environment, including physical network and storage performance, thus providing a holistic view of all physical and virtual shared resources. This offering will be generally available November 23rd. Enhanced capabilities include:
- New capacity planning reports for recommendations on workload placement, highlighting potential energy and server cost savings while adhering to co-location policies. You can now use benchmarking data, results simulation, and a policy framework to more intelligently assess where workloads should be placed, instead of relying solely on resource availability in the virtual host farm.
- Busy administrators can make rapid assessments of server, storage, and network components, showing physical and virtual performance and change history, via a new Web 2.0 dashboard with sensible default settings.
- Organizations with diversified virtualization investments can extend Tivoli virtual environment performance and availability monitoring to Citrix XenApp and XenDesktop via new agents.
- If you have invested in the Cisco Unified Computing System (UCS)
platform, you can now monitor performance and availability attributes
of UCS systems, including chassis and blade health, network fabric
health, and storage management integration.
Check out the official announcement:
I recently found this article, which discusses the rationale for cloud adoption: http://www.tmcnet.com/usubmit/2011/10/31/5893685.htm. One factor listed is capacity management: "Users are considering cloud for capacity management issues including periodic demand peaks and better management of data center growth, power, and cooling issues." This statement speaks to the maturity of the clients surveyed who are considering cloud, in that they have the visibility into their virtual environment to understand workload usage trends, such as predicting peaks and projecting growth of data center resource consumption. In other words, before clients can leverage cloud to add capacity for periodic demand peaks, they must first have capabilities in place for visibility into their existing infrastructure. I am curious to understand whether any of those surveyed have optimized their virtual environment; meaning, have they right-sized their workloads and placed them in a way that maximizes available capacity?
Would you like to show and charge for usage of your IBM Power Systems server?
You may already be aware of the concept of a virtualized system and virtual machines. This might be used by your organization as a means to share physical resources or form the basis for your cloud infrastructure. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workloads. The question is: how do you analyze the usage of such resources and charge appropriately where required?
The Tivoli Usage and Accounting Manager (TUAM) team is pleased to announce that the TUAM IBM Hardware Management Console (HMC) collector also supports collecting usage information from the IBM Systems Director Management Console (SDMC), and facilitates analyzing, reporting, and billing based on the usage and costs of this metering data. This provides a means for enterprises to migrate from HMC to SDMC and ensure continuity of showback/chargeback solutions based on TUAM. Future versions of the HMC/SDMC collector will exploit SDMC-specific features.
Capabilities of the collector include:
- Ability to capture allocation (entitlements) and usage information for each LPAR, Processor Pool, Memory Pool, and the overall System
- Ability to capture capped and uncapped usage and charge different amounts for each (see the sketch after this list)
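To illustrate the capped/uncapped distinction, here is a toy chargeback computation; the rates and usage figures are invented and are not TUAM's actual rating model:

```python
# Hypothetical rates: guaranteed (capped) capacity costs more than
# opportunistic (uncapped) capacity.
RATE_CAPPED = 0.05      # $ per processor-unit-hour
RATE_UNCAPPED = 0.02    # $ per processor-unit-hour

def lpar_charge(capped_hours: float, uncapped_hours: float) -> float:
    """Charge different amounts for capped and uncapped usage."""
    return capped_hours * RATE_CAPPED + uncapped_hours * RATE_UNCAPPED

print(f"LPAR monthly charge: ${lpar_charge(120.0, 45.5):.2f}")  # -> $6.91
```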
What is IBM Systems Director Management Console (SDMC)?
The SDMC provides hardware, service, and virtualization management for your Power Systems server. The SDMC is the successor to the HMC and the Integrated Virtualization Manager (IVM), and shows how IBM Systems Director is going to take an increasingly important role for administrators. For more information on SDMC, see this blog.
For more information about the IBM PowerVM HMC data collector, see the TUAM 7.3 Information Center. The collector is available as part of the TUAM 7.3.0 Enterprise Edition Base Collector Pack.
Unlock the Value of Virtualization with Integrated Service Management Whitepaper