Pay-as-you-go is one of the defining characteristics of cloud computing services. To provide such a service, we need to track the usage
data for each cloud offering. For example, in an IaaS cloud platform the usage data normally covers CPU, memory, disk, and network. For a SaaS cloud offering
like Maximo as a Service (MaaS), the usage data required to charge the cloud
offering consumer may be the business usage data, such as:
- how many assets are managed in the cloud, and for how many days
- how many service requests or work orders are created or closed
Tivoli Usage and Accounting Manager (TUAM) is well-known
for its IT usage metering and chargeback capability. However,
TUAM can also be used to track the business usage data. This blog entry will
describe this TUAM solution designed for MaaS.
In this TUAM solution, the following steps are performed:
- A Maximo daily crontask is developed
and set up to retrieve the usage data from Maximo DB and to create a CSR file
with the business usage data.
- A Maximo virtual machine OS daily
crontask is set up to transfer the CSR file to the TUAM server.
- A TUAM daily job plan is set up to process the CSR files based on the accounting information set up in the TUAM server.
- The existing TUAM invoice reports are leveraged to provide new reports to showback the billing information.
The following paragraphs describe
the details of each of the above steps.
In step 1, we developed MaaSMeteringCronTask.java to collect the usage metering data and to output it into a CSR file in the format required by TUAM. The sample usage data includes: how many assets have a status other than "DECOMMISSIONED", how many work orders were closed in the previous day, and how many service requests were opened in the previous day. The usage data is recorded for each account. The account information is composed of Tenant Company ID + Maximo Instance Name + Maximo VM Hostname. This data is fed into the CSR file.
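To make the data flow concrete, here is a small sketch of how such a record could be assembled (the field order, metric names, and account values below are made up for illustration; the real layout must follow the CSR format documented for TUAM):

```python
# Sketch of building a CSR-style usage record (hypothetical field layout).
# Account code = Tenant Company ID + Maximo Instance Name + Maximo VM Hostname.

def build_account_code(tenant_id, instance, hostname):
    """Concatenate the three identifiers into a single account code."""
    return f"{tenant_id}{instance}{hostname}"

def build_csr_record(date, account_code, metric, quantity):
    """Return one comma-separated usage record (hypothetical field order)."""
    return f"{date},{account_code},{metric},{quantity}"

acct = build_account_code("ACME", "MXINST1", "maasvm01")
record = build_csr_record("20111215", acct, "ASSETS_MANAGED", 128)
print(record)
```

The crontask would emit one such record per account and per metric for each day.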
In step 2, the CSR files are transferred to the TUAM server for processing. On the Maximo VM, we configured a Linux cron job to transfer the files daily.
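As a sketch, the transfer script behind such a cron job could look like the following (the host name, directories, and the use of scp are assumptions, not the actual MaaS configuration):

```python
# Sketch of the daily CSR transfer step (hypothetical host and paths).
import subprocess

def build_transfer_command(csr_file, tuam_host, dest_dir):
    """Build the scp command used to push a CSR file to the TUAM server."""
    return ["scp", csr_file, f"{tuam_host}:{dest_dir}"]

cmd = build_transfer_command("/opt/maas/csr/20111215.csr",
                             "tuamserver.example.com", "/tuam/processes/MaaS")
# subprocess.run(cmd, check=True)  # uncomment to actually run the transfer
print(" ".join(cmd))
```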
In step 3, the usage data in the CSR files is processed by the TUAM server. In order to do this, we first need to set up the TUAM server, including the account code structure, rate group, rate codes, and job file.
The most important configuration in the TUAM server for a usage-based metering solution is to develop a job XML file and configure it to run. You can find a sample job XML file in the TUAM server you installed. In the job XML file for MaaS, we defined a job with ID “MaaS”; this job has one process with five steps:
Scan – this step scans the CSR files that have the date as part of their name, and merges them into CurrentCSR.txt.
Acct – this step adds the account code information into each usage data record by combining the values of TenantID, Hostname, and Maxinstance with the defined lengths. The output data is written to AcctCSR.txt.
Process – this step is a standard TUAM step that processes the usage data to generate the files Ident.txt, BillSummary.txt, and BillDetail.txt.
DatabaseLoad – this step is a standard TUAM step that loads the data from the above three files into the database.
Cleanup – this step cleans up the old CSR files. You can turn off this step if you do not want the CSR files to be deleted.
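As an illustration of the first step, merging the dated CSR files could be sketched like this (the directory layout and file naming are assumptions):

```python
# Sketch of the Scan step: merge dated CSR files into CurrentCSR.txt.
import glob
import os

def scan_and_merge(csr_dir, output_name="CurrentCSR.txt"):
    """Concatenate all *.csr files (date in their name) into one file."""
    merged = os.path.join(csr_dir, output_name)
    with open(merged, "w") as out:
        for path in sorted(glob.glob(os.path.join(csr_dir, "*.csr"))):
            with open(path) as f:
                out.write(f.read())
    return merged
```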
In step 4, we used the out-of-the-box TUAM invoice report to review, or show back, the usage-based metering and billing information for MaaS. You can develop new reports based on your own requirements using the Cognos reporting embedded in TUAM.
Modern Cloud infrastructures are built leveraging thousands of highly distributed servers, used to provide services directly to customers over the Internet. The service provider has two extremely important objectives, which, unfortunately, are to some degree contrasting: a) ensure continuous availability of the Cloud service, and b) contain the cost of the infrastructure and administration (CAPEX and OPEX).
There are several factors that have an impact on the availability of services, mostly related to infrastructure failures. Failures are not only related to unrecoverable hardware outages, but also to recoverable OS or middleware failures.
Not so long ago, the most common approach to high availability was to assume one could deploy infrastructures with the highest Mean Time To Failure (MTTF) possible, which required expensive systems and assumed the possibility to write error-safe software applications. It was also assumed that some degree of down-time was acceptable, with vendors boasting of the number of 9's that they could support (e.g. 99.999% availability). In today's always-on Internet, any downtime of major services becomes headline news. The traditional approach is no longer applicable, and a new approach has to be considered.
Given the requirement to reduce infrastructure costs, service providers are using commodity hardware. Given also the requirement to reduce operational costs, hardware failures are commonly dealt with by directly replacing the failed component rather than manual debugging and recovery by skilled (and expensive) administrators. Thus, to maintain the objective of continuous availability of the service, the Cloud system must be built in order to expect failure of the underlying infrastructure, and not only for temporary periods but it must assume that components will disappear forever. This cannot be limited to only hardware components, as no matter how well a software element is tested, unexpected edge conditions will appear at some point-in-time. So, to guarantee continuous availability, a Cloud solution must also expect its own components to fail too.
Given that we are forced to expect failure, the high MTTF approach is no longer valid, and instead we have to increase availability by flipping the approach to minimizing Mean Time To Recovery (MTTR). The quicker the system can recover from failure, the higher the availability of the service will be. Given however that even a tiny percentage of downtime is no longer acceptable, we also need a means to maintain service availability during the recovery process. One way of doing this is through providing redundancy of all critical services within the Cloud solution.
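This trade-off can be made concrete with the standard steady-state availability formula, A = MTTF / (MTTF + MTTR): shrinking MTTR raises availability just as effectively as growing MTTF. For example:

```python
# Steady-state availability: A = MTTF / (MTTF + MTTR).

def availability(mttf_hours, mttr_hours):
    """Fraction of time the service is up, given mean time to failure
    and mean time to recovery (both in hours)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# With the same MTTF, cutting recovery time from 1 hour to ~36 seconds
# moves the service from roughly "three nines" to "five nines".
print(round(availability(1000, 1.0), 5))
print(round(availability(1000, 0.01), 7))
```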
SmartCloud Provisioning is designed according to Recovery Oriented Computing (ROC) principles: it is based on a highly distributed, redundant and robust infrastructure, with near-zero downtime and automated recovery across heterogeneous platforms, and it does not require expensive systems but can run on relatively low-cost commodity infrastructure.
The key factors that allow SmartCloud Provisioning to be a low-touch and robust cloud infrastructure are the following:
- the infrastructure is as stateless as possible: this avoids issues related to single points of failure
- management agents are deployed on the physical nodes of the infrastructure (compute nodes and storage nodes) and are connected in a peer-to-peer network to form a self-monitoring and self-managing infrastructure
- core services are redundant, being deployed in clusters to tolerate individual faults
- master images are replicated in multiple copies across the storage nodes in the storage cluster; this tolerates HW failures of the storage nodes in the cluster as well as network failures when accessing one copy of the image
- hypervisor (compute) nodes are deployed via a stateless boot, so that a failing hypervisor can be re-deployed by simply rebooting it and getting a fresh new copy of the hypervisor image. This also allows easy deployment of new nodes if needed, to augment the capacity of the infrastructure
Let's consider some typical failure scenarios that can happen in a real environment and let's see how the SmartCloud Provisioning is designed to tolerate them and react appropriately.
First example is related to the management agents that are used by SmartCloud Provisioning to perform the standard provisioning operations.
Management agents are deployed on both the compute nodes and the storage nodes and are organized in dynamic hierarchies, where a leader (manager) is dynamically elected. The leader is just the entry point for distributing requests across the infrastructure and a coordinator of operations, but this role does not imply any special information being associated with the agent itself (stateless infrastructure): any agent can be a leader.
All the agents have a watch-dog mechanism that is used to prevent, detect, and correct failures; they also monitor each other in their neighbourhood and can start simple actions to fix other agents' issues.
So, if an agent fails, the watch-dog mechanism tries to restart it. If the watch-dog is not able to restart the agent, neighbours try some simple actions to restart the failing agent. If the agent cannot be restarted, the system keeps on working without that node, thanks to the redundant infrastructure.
If the failing agent was a leader, and it cannot be restarted, the remaining agents re-elect their leader dynamically, without losing any information.
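As a toy illustration of this behaviour (the product's actual election protocol is internal; the selection rule used here is made up), re-election among the surviving agents could look like:

```python
# Toy model of the self-managing agent network: when the leader dies,
# the surviving agents elect a new one deterministically. Because the
# leader role carries no special state, any agent can take it over.

def elect_leader(alive_agents):
    """Pick the lowest agent id among the survivors (one simple rule)."""
    return min(alive_agents) if alive_agents else None

agents = {"node1", "node2", "node3"}
leader = elect_leader(agents)      # the initial leader
agents.discard(leader)             # the leader fails and cannot be restarted
new_leader = elect_leader(agents)  # the survivors re-elect a leader
print(new_leader)
```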
Another example is related to failures either in a storage node or in a compute node.
If a storage node fails, thanks to the redundant deployment and to the multiple copies of the same image available in the storage cluster, the deployment of VMs can continue without issues, and the leader agent will try to restart the failing node.
If a compute node fails, the leader detects the failure and stops sending requests to that node. Moreover, it tries to restart the node, forcing a fresh copy of the compute node image to be re-deployed via PXE boot.
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
IBM Tivoli Monitoring for Virtual Environments V7.1 is now available.
At a glance:
IBM® Tivoli® Monitoring for Virtual Environments V7.1 extends the benefits of end-to-end performance monitoring in a virtualized environment by providing additional hypervisor support and new capacity planning reports.
• New Web 2.0 dashboard
• New Cisco UCS monitoring agent
• New Citrix XenDesktop and XenApp performance and availability monitoring
• New capacity analytics and workload placement guidance for VMware
IBM Tivoli Monitoring for Virtual Environments V7.1 extends the benefits of end-to-end performance monitoring in a virtualized environment by providing sophisticated capacity analytics and workload placement guidance that allow you to safely maximize the density of virtual hosts. By using this insight and guidance, you are more likely to realize the promised cost savings of virtualization by eliminating the uncertainty that often accompanies a migration from physical application servers to virtual ones. Policy-driven analytics for VMware environments do not simply place virtual machines on the "least busy" hosts, but rather place them where those workloads will function best across a range of performance and compliance conditions. New dashboards will then allow operators to conveniently assess the overall health of the newly tuned virtual infrastructure.
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple—you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations who are still trying to leverage the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
- Rapidly scalable deployment designed to meet business growth
- Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
- Reduced complexity through ease of use and improved time to value
- Reduced IT labor resources with self-service requesting and highly automated operations
- Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we’ve seen customers get a cloud up and running in just hours—realizing immediate time to value. It’s fast—administrators have been able to go from bare metal to ready-for-work in under five minutes, or start a single VM and load OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
With the new release of IBM SmartCloud Provisioning this week, you can try and see firsthand the potential of this breakthrough technology to accelerate your journey to cloud. And if you want a preview of what’s in development, you can join our Open Beta program for access to beta-level code.
Starting from December 9th 2011 IBM SmartCloud Provisioning 1.2 is available for download.
The key features introduced in this release are:
Full product install through an interactive tool:
IBM SmartCloud Provisioning can now be installed using a graphical
wizard. Two flavours of the installer are available: minimal and custom. The custom installation allows you to specify the number of HBase and ZooKeeper instances to be deployed. Moreover, it can automatically configure ESXi servers as compute nodes. The creation of the management virtual image on VMware is automated.
Support for multiple networks:
you can now deploy images with more than one NIC. Different users can deploy images in segregated networks.
Integration of the Image Construction and Composition Tool:
The Image Construction and Composition Tool helps build and customize master images. It is designed to facilitate a separation of concerns and tasks, where experts build software bundles for reuse by others. This design approach greatly reduces the complexity of virtual image creation and reduces errors.
Support of Open Virtualization Format (OVF):
- OVF images can be created or modified by the Image Construction and Composition Tool
- OVF metadata can be displayed and modified in the Self Service UI
Integration of the Virtual Image Library component:
The Virtual Image Library helps manage the life cycle of virtual images:
- Search images for specific software products
- Compare two images and determine the differences in files and products
- Find similar images
- Track image versions and provenance
The cloud administrator can use a brand new UI to perform tasks such as registering images, registering networks, managing quotas, assigning roles, and managing elastic IPs.
The IBM® Image Construction and Composition Tool is a web application that simplifies and automates virtual image creation for public and private cloud environments, shielding the differences in cloud implementations from its users.
This white paper provides Software Specialists and other product experts with helpful tips and techniques to plan, design, and create software bundles in the Image Construction and Composition Tool.
I've recorded a couple of demo movies to show the capabilities of the new IBM Virtual Image Library v1.1 that comes with the SmartCloud Provisioning v1.2 product. You can use the links below to go directly to the movies:
The DBMS placement in Cloud solutions based on Tivoli Provisioning Manager (TPM) / Service Automation Manager (TSAM) / Service Delivery Manager (ISDM) plays a significant role in overall product function and performance, and in how the solution adapts to an evolving workload.
A typical setup approach is to install TPM/TSAM with the DBMS co-located.
This is the default setup option in the TSAM installation and TSAM-VM-image which is included in the ISDM solution.
Over time, based on increasing workload, capacity planning, or production requirements, it may be desirable to move the local database to a remote node, with the goal to achieve greater scale and to exploit additional resources.
A white paper
is available for this purpose in the Integrated Service Management Library.
The referenced paper has recently been updated to version 2.4, and describes how to relocate the DBMS in existing TPM / TSAM / ISDM solutions.
A very interesting Cloud Computing case study of the Capgemini Infrastructure as a Service delivery platform project has been recently published on the Web:
The case study shows how one of the world’s leading infrastructure outsourcing providers has seen the business opportunity of offering to its clients a cloud-based solution that combines the benefits of a high-value infrastructure service provider with the cost advantages of Cloud computing. Capgemini focused the new cloud based services on delivering to their clients Infrastructure as a Service capabilities with much higher flexibility and substantial cost-efficiency.
In partnership with IBM, Capgemini built a fully integrated cloud delivery platform for clients in the UK and USA, leveraging the Tivoli Service Delivery Manager solution that includes the IBM Tivoli Service Automation Manager, Tivoli Monitoring, and Tivoli Usage Accounting Manager products, on top of IBM BladeCenter HS22V hardware and XIV Storage System technologies.
The key aspects of the solution built by Capgemini have been:
- Implementation of a resilient and scalable global infrastructure with capability of managing resource pools in different regions and with a modular design for quick scale out
- Single solution able to manage a wide range of platforms and architectures without being tied to any specific hardware technology or vendor. Ability to choose the right hypervisor and guest OS platforms for the right workload
- Multi-Customer shared infrastructure providing secure network separation between customer environments
- Automation of network management and configuration that supports multiple network domains per customer and linkage to the customers' private networks
- Extensible service catalog to fit the needs of the Capgemini customers
- Ability to quickly on-board existing Capgemini customer workloads.
IBM® Tivoli® Service Automation
Manager (TSAM) has delivered yet another cloud extension that provides service
offerings for automating the provisioning of network attached storage (NAS)
with an NFS export name. The file systems can then be mounted into virtual
machines provisioned within TSAM Virtual Servers Projects. The
extension introduces the concept of Storage-only Project, which
allows managing the entire life-cycle of the file systems (create, expand, set
access, and destroy), in a secure multi-tenant environment. It works in
integration with IBM N series and NetApp FAS series
storage systems as sketched in the picture below.
Once you download the installation
package from the Integrated Service Management Library (http://www.ibm.com/software/ismlibrary?NavCode=1TW10TS0F) and install it on top of TSAM 7.2.2
platform, your cloud administrator can easily configure the Extension for
Network Attached Storage to provision NFS-mountable file systems. In fact, the
extension provides a plug-in to the Cloud Storage Pool Administration
TSAM application where she can enter the hostname of the workstation running the
OnCommand NetApp management software, and the credentials to
access it. Then the extension automatically discovers all the storage resources (NetApp
Datasets) from the underlying storage systems and makes them visible as
TSAM Storage Pools. At that point the cloud administrator can regulate
access to the storage resources using the TSAM way of associating storage pools
and quotas to customers,
and that’s it, the extension is configured. Now you can delegate to your
customers the management of storage up to the assigned quota: the customer
administrators can start requesting storage for their virtual servers by
creating storage projects and add, expand, and delete file systems. The entry
point for this is the Tivoli Self Service Station – Storage Management folder
(showed in the picture below).
The Create Storage Project offering brings a simple user interface for requesting file systems and assigning them to teams of users (see the example pictures below).
The customer administrator has to enter a prefix for the NFS export name, a TSAM Storage Pool from which to carve the storage, and the size of the file system; that's it. She can decide to create many file systems with the same characteristics by increasing the value of the “Number” spin control. She can decide to make the file systems available to all the teams of the customer by checking the “Access to All Teams” box; by default the storage is only visible to the team of users that owns it.
Note that once the storage project
has been created, the file systems cannot be mounted yet into virtual servers because
there is no ACL set on the IBM N series boxes for them. To do so, the customer
administrator creates TSAM Projects with Virtual Servers, and associates file
systems to the virtual machines belonging to the project: the extension
automatically updates the access control list (ACL) of the NFS export name
adding the IP address of the virtual machines. When the user logs in, she can
mount the file systems and use them (she gets the information of the NFS export
name with a notification e-mail).
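Conceptually, the ACL update amounts to the following (the export name, IP addresses, and in-memory representation are illustrative; the real extension drives the NetApp management software):

```python
# Toy model of granting virtual machines access to an NFS export:
# associating a VM with a file system adds its IP to the export's ACL.

def grant_access(acls, export_name, vm_ip):
    """Add vm_ip to the access control list of the given NFS export."""
    acls.setdefault(export_name, set()).add(vm_ip)
    return acls

acls = {}
grant_access(acls, "/vol/tenantA_fs01", "10.0.0.21")
grant_access(acls, "/vol/tenantA_fs01", "10.0.0.22")
print(sorted(acls["/vol/tenantA_fs01"]))
```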
In summary, the predefined functions
that you get with the TSAM Extension for NAS storage are:
Service offerings for managing the entire life-cycle (create, expand,
destroy, set access) of shared file systems accessible with the NFS protocol;
Service offering for authorizing virtual servers to mount storage;
Administrative graphical user interface for discovering NetApp Datasets as TSAM Storage Pools and restricting usage by customer.
There are no predefined features to create and manage NetApp Datasets, nor vFilers to create customer silos.
For example, what if you want to automate the creation of a vFiler and of a
couple of storage pools – gold and silver, upon on-boarding of a new customer?
There are no predefined features to authorize the shared file systems to anything but a virtual server within a Virtual Servers project. What if you want to automatically attach a file system to a VMware cluster as a backend datastore for VM images upon creation in a storage project?
Well, the TSAM Extension for NAS
storage provides low-level Tivoli Provisioning Manager (TPM) Workflows and
Tivoli Platform Automation engine (TPAe) Runbooks that can be used to implement
such automations in custom extensions that you can write based on best
practices described in the TSAM platform extensibility guide.
Exciting news!! We announced this week the upcoming availability of IBM Tivoli Monitoring for Virtual Environments v7.1 (formerly known as ITM for Virtual Servers). Why did we change the name? Previously, our focus was on ensuring the health of the virtual server environment - VMs & hosts and virtual storage and network elements like data store capacity, etc. With this release, we are now focused across the virtual environment to include physical network and storage performance, thus providing a holistic view of all physical and virtual shared resources across the virtual environment. This offering will be generally available November 23rd. Enhanced capabilities include:
- New capacity planning reports for recommendations on workload
placement, highlighting potential energy and server costs savings while
adhering to co-location policies. You can now use benchmarking data,
results simulation, and a policy framework to more intelligently assess
where workloads should be placed, instead of relying solely on resource
availability in the virtual host farm.
- A new Web 2.0 dashboard lets busy administrators make rapid assessments of server, storage, and network components, showing physical and virtual performance and change history via default settings.
- Organizations with diversified virtualization investments can extend Tivoli virtual environment performance and availability monitoring to Citrix XenApp and XenDesktop via newly released monitoring agents.
- If you have invested in the Cisco Unified Computing System (UCS)
platform, you can now monitor performance and availability attributes
of UCS systems, including chassis and blade health, network fabric
health, and storage management integration.
Check out the official announcement:
We’re getting really good at deploying images. The new SmartCloud Provisioning product makes image deployment faster and easier than ever.
While the speed and simplicity are cool, left unchecked, image sprawl issues may catch up with you faster than ever.
Virtual image sprawl is a relatively new phenomenon, derived from the ease of capturing and creating new virtual images: virtualization and cloud computing make it very easy to create them.
As image catalogs grow, finding the right image gets harder. Existing images quickly become out of date. Creating a new image is often easier than figuring out which existing image might be reusable. This all leads to a sprawl of images, and corresponding management issues.
To control, and proactively prevent, image sprawl, we just added
two new capabilities, the Virtual Image Library and the Image Construction and
Composition Tool, into the SmartCloud Provisioning 1.2 beta program. The Virtual Image Library provides a central
view of all your images and instances – across any SmartCloud Provisioning
deployment as well as your existing VMware environments. With Virtual Image Library you can quickly understand
the content of your images, search, and run comparison reports for both
differences and similarities. This will help
you find images to reuse (instead of creating yet another image), and begins to
proactively identify consolidation candidates.
In addition, Virtual Image Library supports a central repository for
your master images, allowing you to perform version control, check-in and check-out
operations across your different environments.
While the Virtual Image Library helps you control and manage your
images, the Image Construction and Composition Tool is a proactive step to prevent
image sprawl. With the tool, you
can construct images to share and reuse
across your cloud. The tool makes it
easy to create an image that is reconfigurable during the deployment
process. You can choose to expose
configuration parameters such as user names and ports, and even different
configuration choices. The SmartCloud Provisioning
1.2 instance creation dialog automatically displays these parameters and passes
them through to run your customization scripts.
For example, we use this technique to have one WebSphere Application
Server image that at deploy time is configured as a stand-alone node, or a
custom node, or a deployment manager node, or even an IBM HTTP Server node -
all from the same image. In addition to building images for SmartCloud Provisioning 1.2, the tool builds images for the SmartCloud Enterprise public cloud, and builds images for combination into virtual system patterns using IBM Workload Deployer.
I hope you’ll take a look at these new beta capabilities and
provide feedback on the SmartCloud Provisioning Open Beta Forum. Let's tame the image sprawl monster.
Most generally accepted definitions of Cloud Computing imply the notion of Pay per use. For a Service Provider this means defining how they intend to bill for Cloud Services, while for a Cloud enabled DataCentre in the enterprise this implies some form of showback/chargeback model. As for those consumers actually using the Cloud, they want to understand the financial implications (what will it cost?) before committing their workloads to it.
As a Cloud User
- Do you want to see what your project will cost before you provision it?
- See a price list for all the services you can provision - comparing prices for different options?
- Use a calculator to help you predict what a project will cost per month (or day or year)?
- See what the effect of changing the resources used by a project will do to the cost?
As a Cloud Provider
- Do you want to define different prices for a Service depending on the options that the user chooses?
- Set different prices for each service for different customer groups?
The following screenshots illustrate how the new cloud cost management
capability delivers solutions to these problems. The new TSAM Extension for Usage and Accounting is available to download now via the ISM Library
See the Prices for the different Cloud offerings and compare different options
The first dropdown in the view shown below lists the Offerings that are available to the customer.
Offerings can be anything the Cloud provider chooses to make available, for example: Virtual Servers, Storage, or even PaaS or SaaS offerings. The consumer can see up front what the different rates are for each component, and compare these across different offering types.
See what it would cost per month to run a new project in the Cloud
In this example, we want to have one machine to run an Application Server and one machine to run a Database, and we need additional Tier 1 storage in order to store the database data. The calculator shows how much this will cost per month, overall and in terms of the two Service Offerings that this particular Cloud provides.
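Arithmetically, the calculator multiplies each requested resource by its rate and sums the results. A minimal sketch, with made-up rates:

```python
# Sketch of the monthly cost calculation (all rates are hypothetical).
RATES = {                  # price per unit per month
    "vm.small": 50.0,      # one virtual machine
    "storage.tier1_gb": 0.25,
}

def monthly_cost(request):
    """Sum quantity * rate over every line item in the request."""
    return sum(qty * RATES[item] for item, qty in request.items())

# Two VMs (application server + database) plus 100 GB of Tier 1 storage.
project = {"vm.small": 2, "storage.tier1_gb": 100}
print(monthly_cost(project))  # 125.0
```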
Different customers can be assigned to different subscriptions
A subscription is a means to segment your customers into different groups such as by geography or customer type (direct, business partners etc).
In this example, the RATIONAL and TIVOLI customers are assigned to the US (United States) subscription. Customers with this subscription share the same set of available offerings and pay the same price for those offerings.
Offerings are defined once and then added to Subscriptions
Once they are part of a subscription, the actual rate values (price per unit) can be defined for each element of the offering template.
If you wish to join the TUAM group
to get more involved in reviewing new features and testing beta capability, then let me know and I can send you an invite.
New extensions released for TSAM 7.2.2 extend core capabilities and offer customers:
- more secure customer networks
- improved response, effectiveness, and adaptation to Cloud users
- reduced storage costs by managing shared virtual file systems
Extension for Juniper
- Extends TSAM Network device
support with Juniper firewall and F5 Big-IP Load balancer
- Increases security by creating
firewall configuration rules within TSAM
- Increases resource utilization
by setting Load balancer policies within TSAM
Virtual Disk Extension
- Enables customers to add/delete additional virtual disks to the projects/VMs, within the Multi-customer environment
- Improves efficiency by cloning
- Provisions additional storage
- Provides additional flexibility
in service offering for storage in Compute-as-a-Service and
- Extends coverage beyond vmdk to
external storage systems such as SONAS and NetApp (NSeries)
Power is supported within Tivoli Service Automation Manager
Load Balancer Extension
Load balancing is one of the key values
of any cloud project. The load balancer extension enables the definition of
rules to automatically distribute the workload amongst VMs in the project whilst providing a single entry
point (Virtual-IP) to external users (i.e. it presents itself as a single
powerful machine to the user). Key Features include:
- Reserve/Release Virtual IPs – Virtual IPs (VIPs) can be reserved on a project subnet so that load balancer policies can be created between them
- Create/Modify/Delete a Load Balancer policy – policies can be created which are used to reach an application running on a pool of VMs; VIPs and ports are associated to the VMs that will run the application
- BIG-IP device parameters can be set (e.g. load balancing algorithm)
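As a rough model of the objects involved (the field names here are illustrative, not the actual BIG-IP or TSAM API):

```python
# Toy model of a load balancer policy: a Virtual IP fronting a pool of VMs.

def create_policy(vip, port, pool, algorithm="round-robin"):
    """Associate a VIP and port with the pool of VMs running the app."""
    return {"vip": vip, "port": port, "pool": list(pool),
            "algorithm": algorithm}

policy = create_policy("192.168.10.5", 80,
                       ["10.0.0.11", "10.0.0.12", "10.0.0.13"])
print(policy["vip"], len(policy["pool"]))
```

External users see only the VIP; the algorithm decides which VM in the pool serves each request.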
Two more TSAM extensions, for NetApp Storage and Costing Preview, will be available in December 2011.
The TSAM 7.2.2 extensions are available free of charge and can be downloaded from the IBM Service Management Library using the links below.
Network Extension for Juniper – Download here
Additional Virtual Disk Extension – Download here
Load Balancer Extension – Download here
If you are interested in attending our daily demo sessions (see https://www.ibm.com/developerworks/mydeveloperworks/blogs/9e696bfa-94af-4f5a-ab50-c955cca76fd0/entry/new_schedule_and_agenda_for_daily_demo_sessions_of_ibm_smartcloud_provisioning2?lang=en) but:
- you do not feel comfortable with our schedule
- you would like to discuss with us about functionalities that are not covered by the current agenda
- you would like to join an exclusive usability session, fully dedicated to you
Please post your request on the open beta forum.