Modified by cynthyap
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions of when cloud capabilities truly advance into the orchestration realm. Frequently it's defined simply as "automation = orchestration."
But automation is just the starting point for cloud. As organizations move beyond managing their virtualized environments, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning is in most cases handled by various solutions that have been added over time as needs increased. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge that automation alone cannot address. As the saying goes, "If you automate a mess, you get an automated mess."
The need to orchestrate becomes clear when the various aspects of cloud management are brought together. The value to the organization at this stage of cloud is simplifying the management of automation -- otherwise a balancing act across multiple hypervisors, resource usage, availability, scalability, performance and more -- based on what the business needs from the cloud, with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it's difficult to realize the full benefits of cloud computing. The stitching together of best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
In addition to speeding service delivery, orchestration can yield significant cost savings in labor and resources by eliminating manual intervention in the management of varied IT resources and services.
Some key traits of cloud orchestration include:
• Integration of cloud capabilities across heterogeneous environments and infrastructures to simplify, automate and optimize service deployment
• Self-service portal for selection of cloud services, including storage and networking, from a predefined menu of offerings
• Reduced need for intervention to allow lower ratio of administrators to physical and virtual servers
• Automated high-scale provisioning and de-provisioning of resources with policy-based tools to manage virtual machine sprawl by reclaiming resources automatically
• Ability to integrate workflows and approval chains across technology silos to improve collaboration and reduce delays
• Real-time monitoring of physical and virtual cloud resources, as well as usage and accounting chargeback capabilities to track and optimize system usage
• Prepackaged automation templates and workflows for most common resource types to ease adoption of best practices and minimize transition time
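The policy-based reclamation trait above can be sketched in a few lines. This is a purely illustrative toy, not any product's API; the idle threshold, record shape and VM names are all invented:

```python
from datetime import datetime, timedelta

# Hypothetical reclamation policy: flag VMs idle longer than the threshold
IDLE_THRESHOLD = timedelta(days=30)

def vms_to_reclaim(vms, now):
    """Return names of VMs whose last activity exceeds the idle threshold."""
    return [vm["name"] for vm in vms
            if now - vm["last_active"] > IDLE_THRESHOLD]

now = datetime(2012, 6, 1)
inventory = [
    {"name": "vm-build-01", "last_active": datetime(2012, 5, 28)},
    {"name": "vm-test-17",  "last_active": datetime(2012, 3, 2)},
]
print(vms_to_reclaim(inventory, now))  # only the long-idle VM is flagged
```

A real orchestrator would feed such a policy from its inventory database and trigger de-provisioning workflows rather than just reporting names.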
In short, many of the capabilities that we associate with cloud computing are really elements of orchestration. In an orchestrated environment, organizations gain tools to manage their cloud workloads through a single interface, providing greater efficiency, control and scalability. As cloud environments become more complex and organizations seek greater benefit from their computing resources, the need for sophisticated management solutions that can orchestrate across the entire environment will become ever clearer.
Learn more about how cloud orchestration capabilities can help your business. And join the Cloud Provisioning and Orchestration development community to test out the latest cloud solutions and provide feedback to impact development.
As part of the transparent development initiative, IBM SmartCloud Provisioning (formerly known as IBM Service Agility Accelerator for Cloud) is launching a series of daily demos, starting November 7. Each session will last about one hour.
This way you can see, almost in real time, what is happening in IBM SmartCloud Provisioning development and learn about new and enhanced capabilities.
If you are interested in joining the sessions, here is the schedule in Central European Time (CET):
- Monday at 4:00 PM
- Tuesday at 11:00 AM
- Wednesday at 4:00 PM
- Thursday at 5:00 PM
- Friday at 11:00 AM
The sessions will be focused on image management.
If you would like to join, connect with your web browser; no password is required.
The Tivoli Usage and Accounting Manager (TUAM) development team is pleased to announce the release of the IBM® Tivoli® Service Automation Manager (TSAM) - Extension for Usage and Accounting v1.0.
This TSAM extension delivers cloud cost management capability by enhancing the integration, reporting and services between TUAM and TSAM. The extension allows cloud users to view historical invoice reports that show the charges associated with each project.
The Usage and Accounting v1.0 extension provides the following features:
- Easier Cloud Usage Report Access - Enabling Cloud users to access and view historical Usage and Accounting Manager Cognos reports directly from TSAM. Single sign on is configured between the two systems to allow for easier report access.
- Role-based Report Security - Security access can now be configured to ensure that users that belong to the TSAM Cloud security groups can only access the TUAM Cognos reports that they are assigned to. For example, users that belong to the Cloud Customer and Cloud Team administrator user groups in TSAM can now be assigned access to specific TUAM Cognos reports.
- Account Code Report Security - Account code security is used for customer and team reporting data segregation based on cloud roles in TSAM. This is achieved by data synchronization between TSAM and TUAM which involves aligning TSAM entities such as customers, teams, security groups and users with TUAM entities such as clients, users and user groups. After the synchronization process has completed, account code security is applied to the reports that TSAM users access.
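The entity alignment described in the last item amounts to a mapping between the two data models. As a purely illustrative sketch (the record shapes and the client-prefixed naming scheme are invented, not the actual TSAM or TUAM schemas):

```python
# Invented record shapes for illustration only
tsam_customers = [{"customer": "Acme", "teams": ["Dev", "Ops"]}]

def to_tuam(customers):
    """Map TSAM customers and teams onto TUAM clients and user groups."""
    clients = []
    for c in customers:
        clients.append({
            "client": c["customer"],
            # One TUAM user group per TSAM team, prefixed by the client name
            "user_groups": [f'{c["customer"]}/{t}' for t in c["teams"]],
        })
    return clients

print(to_tuam(tsam_customers))
```

Once such a mapping is maintained by the synchronization process, account code security can be applied per user group on the report side.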
The following table shows the evolution of the TSAM/TUAM integration.
The diagram below shows how the Usage and Accounting v1.0 extension facilitates the integration between TSAM and TUAM.
For more information about the Usage and Accounting v1.0 extension, log on to the Information Center
The extension is available free of charge as part of a TUAM fix pack, which is available on Fix Central
A Rates Preview and Charges Preview of costs is now available on the ISM Library as fully supported.
Modified by Nimesh Bhatia
IBM made a significant commitment to OpenStack by joining the OpenStack Foundation as a Platinum Member. The IBM SmartCloud Orchestrator v2.2 product has adopted OpenStack to provide enterprises the functionality needed to effectively create and manage their cloud implementations.
The IBM Cloud Labs team is innovating in the area of cloud analytics. A new feature has been created named Information Hub for SmartCloud Orchestrator that adds exciting new reporting dashboards. The new feature will be available as an add-on at ISM Cloud MarketPlace.
The Information Hub dashboard has been designed to put information about the cloud infrastructure at the fingertips of cloud users, administrators, planners and decision makers. It provides usage trend graphs, determines when a critical resource will run out, and aggregates information across multi-OpenStack environments. Additionally, the information is made available on mobile devices.
These capabilities improve productivity for cloud users and administrators, help cloud capacity planners see the pace of cloud adoption in the enterprise and plan ahead, and let decision makers take the information with them to make informed business decisions about the cloud infrastructure.
Modified by alucches
IBM SmartCloud Orchestrator, the first new private cloud offering based on OpenStack and other cloud standards, is now available. Users are looking for cloud solutions that increase agility, deliver cost savings and offer a competitive advantage. IBM SmartCloud Orchestrator meets those needs:
Patterns of expertise learned from decades of successful client and partner engagements - SmartCloud Orchestrator captures best practices for complex tasks in patterns that are abstracted, not hardcoded. Best-practice KPIs, measurements and policies are built into the patterns to allow semi-automated or automated vertical scaling up and down. Deploy applications rapidly with repeatable patterns across private and public clouds: SmartCloud Orchestrator enables third-party software deployments and custom pattern creation so you can build once and deploy across private and public clouds.
Robust, automated, high-scale cloud provisioning - requested VMs will be up and running in under a minute on standard hardware
SmartCloud Orchestrator includes OpenStack!
End-to-end orchestration that bridges domains: cloud, infrastructure, back-end integration and service processes. Orchestration is dynamic at runtime, ensuring that the latest human and automated interactions are always in effect.
Lower operational costs by leveraging existing hardware and hypervisors - a single management platform across different infrastructures reduces complexity and operational cost, and integrates compute, network, storage and application delivery to enable organizational integration
Get started today!
SmartCloud Orchestrator Analyst and Press Coverage:
Modified by b1stern
I wanted to let everyone know that a Trial Virtual Machine is available for the SmartCloud Monitoring version 7.2 FP1 product. It provides a 90-day trial of the software to monitor your virtualized environment and includes the Capacity Planning tools for VMware and PowerVM. These tools can help you optimize your virtualized environment and save money.
Within a few hours you can have the Virtual Machine up and running and monitoring your Virtualized environment.
This is a great tool if you are working with a customer on a proof of concept. Or, if you are a customer, it is a really quick and easy way to evaluate the software.
The Trial includes the SmartCloud Monitoring product plus a little bit of extra content. It includes monitoring for:
PowerVM (including OS, VIOS, CEC and HMC)
Citrix XenApp, XenDesktop, XenServer
Log File Monitoring
Agent Based and Agent-less Operating System monitoring
Integration with Tivoli Storage Productivity Center
Integration with IBM Systems Director
The trial also includes Predictive Analytics, Capacity Planning and Optimization for VMware and PowerVM
You can find the software at the following URL: https://www.ibm.com/services/forms/preLogin.do?source=swg-ibmscmpcvi2
If you have any questions or need assistance, you can send me an email at firstname.lastname@example.org
Two new white papers are available on the IBM Integrated Service Management Library (ISML) that explain how to use Tivoli Storage Manager to back up different areas within IBM SmartCloud Provisioning.
The first white paper describes how to use the Tivoli Storage Manager Backup-Archive client to back up and restore the boot volume of an IBM SmartCloud Provisioning persistent virtual machine, and how to make periodic backups of a normal volume and select and restore a specific backup version.
The white paper can be downloaded from the IBM Integrated Service Management Library (ISML) following this link -> Backing up IBM SmartCloud Provisioning's Persistent Volumes with Tivoli Storage Manager Client
The second white paper describes how to use the Tivoli Storage Manager Backup-Archive client to back up and restore the following components of the IBM SmartCloud Provisioning infrastructure: the Preboot Execution Environment (PXE) server, the web console configuration, and the HBase data store.
The white paper can be downloaded from the IBM Integrated Service Management Library (ISML) following this link -> Backing up IBM SmartCloud Provisioning's Infrastructure with Tivoli Storage Manager Client
We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.
The IT organization must deliver services quickly to meet the demands of the business while providing high levels of security and compliance. In the past, service delivery itself was typically the bottleneck; now, with automated cloud and self-service delivery models, the teams responsible for change management and security can quickly become the bottleneck instead, due to manual processes and siloed tools.
For example, organizations need the ability to patch all of their systems, both physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications and related security configurations. And offline and suspended VMs that haven't been patched in weeks or months can pose serious security exposures.
A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.
Cloud provisioning should combine application and image provisioning for workload optimized clouds and deliver:
· Reduced costs with automated high-scale provisioning; multiple hypervisor options and HW of choice
· Accelerated time-to-market with standardized pattern-based deployment for workload optimized cloud
· Image sprawl prevention with in-built advanced image lifecycle management capabilities
· Ease of adoption and clear roadmap to move to advanced cloud capabilities
Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:
· Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms
· Automatic assessment and “single click” remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly
· Enterprise-class scalability and security to provide proven scalability, including fine-grained authorization and access control capabilities
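The assess-then-remediate loop described above can be illustrated with a toy compliance check. The patch names and the endpoint record shape are invented for illustration, not the SmartCloud Patch Management API:

```python
# Hypothetical baseline of patches every endpoint must have
REQUIRED_PATCHES = {"KB2507938", "KB2533623", "openssl-1.0.0g"}

def missing_patches(endpoint):
    """Return, sorted, the required patches an endpoint has not installed."""
    return sorted(REQUIRED_PATCHES - set(endpoint["installed"]))

endpoint = {"host": "web-01", "installed": ["KB2507938"]}
print(missing_patches(endpoint))  # the two patches web-01 still needs
```

In a real solution this assessment runs continuously across all endpoints, and the "single click" remediation step targets exactly the endpoints for which this set is non-empty.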
Explore these capabilities with the new IBM SmartCloud Patch Management.
With December's release of IBM SmartCloud Monitoring, Tivoli's venerable IBM Tivoli Monitoring product family, proven in data centers at the world's largest corporations, begins to adopt a "Cloud" posture. Sure, "Cloud" is a term bereft of a clear operational definition that we can apply at any given moment, and customers, analysts and vendors tend to bandy it about pretty freely these days. However, if we don't get too hung up on what Cloud is or isn't, we can probably agree that it represents a migration from our traditional server-delivered infrastructure to one comprised of pooled computing resources shared by virtual workloads. Whether or not our customers are calling their virtualized environments "private clouds" today, and whether or not they've got a "cloud budget" that they're using for such initiatives, the fact that they're moving along the cloud maturity continuum at some pace seems inescapable, given IDC's assertion that we crossed the magical "50%" boundary last year, when half of all corporate workloads were running on virtual machines instead of physical ones.
If we're beginning to think in terms of clouds of pooled computing resources, it makes sense that we begin to deliver management solutions in the same way, right? If the server administrators, storage administrators and network administrators now report to a cloud administrator, we should begin to package solutions for those cloud administrators, combining multiple pieces of management technology into a single part number that customers can purchase and deploy. That's exactly what we've done with SmartCloud Monitoring. The discrete monitoring agents at the heart of IBM Tivoli Monitoring (OS monitors, application monitors, storage monitors and so on) are as important as they ever were. Even though we're pooling those resources across virtual machines, we still have to monitor things like processes, CPU activity, IO throughput, and so on. We just need to add a layer on top of all that granular detail, so the cloud administrator can see, at a glance, what's healthy or unhealthy about their cloud environment, before drilling down into the nuts and bolts.
SmartCloud Monitoring combines the VMware virtualization management features in ITM for Virtual Environments with virtual machine instance monitoring from ITM's operating system agents, to monitor a cloud infrastructure and the workloads running on it.
Our roadmap looks like an analyst's cloud maturity ladder, adding features such as automated provisioning, usage and accounting integration, and more detailed network monitoring, so our solution will "mature" along with the market, and customers' needs. See if the challenges along this ladder look like things that you or your customer have faced on their cloud journey, or are grappling with now. It's important to note that Tivoli has solutions that can be applied to each step, and for each problem. What SmartCloud promises is a way to bring those solutions together into more consumable bundles, tightly integrated together, to make cloud management simple to purchase and simple to deploy.
SmartCloud Monitoring delivers key capabilities for optimizing and maintaining a private cloud, including:
- Health dashboards, to provide an instant, consolidated glimpse into cloud health
- Topology views of the key interrelated components of the cloud
- Reports on the health trends of cloud components and workloads, powered by Cognos
- What-if capacity planning scenarios
- Policy-based optimization to put workloads where they’ll perform best, not just where they’ll fit
- Performance Analytics for right-sizing of virtual machines
- Integration with industry-leading Tivoli service management portfolio
Service Health for IBM SmartCloud Provisioning is now generally available on the IBM Integrated Service Management Library (ISML).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring, using a custom agent, OS agents, and the ITM for Virtual Environments agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to identify and react quickly to issues in your environment, such as an unresponsive compute node, high disk usage on storage nodes or key kernel services not responding, and so minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library( ISML ) following this link -> Service Health for IBM SmartCloud Provisioning
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSP) or cloud service providers (CSP), and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce human resources and error from manual tasks—all with an eye to drive revenue and acquire new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify complex and time consuming processes for creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the full spectrum of functionality from virtualization to orchestration helps service providers manage their environments, high-scale provisioning in particular offers a cost-effective way to treat capacity as a business commodity: a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative cost of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand but it was very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interruption to customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
Learn more about provisioning and orchestration capabilities that are helping service providers meet their growing business needs.
IBM® Tivoli® Service Automation Manager (TSAM) has delivered yet another cloud extension, providing service offerings for automating the provisioning of network attached storage (NAS) with an NFS export name. The file systems can then be mounted into virtual machines provisioned within TSAM Virtual Servers projects. The extension introduces the concept of a Storage-only Project, which allows managing the entire life cycle of the file systems (create, expand, set access, and destroy) in a secure multi-tenant environment. It integrates with IBM N series and NetApp FAS series storage systems, as sketched in the picture below.
Once you download the installation package from the Integrated Service Management Library (http://www.ibm.com/software/ismlibrary?NavCode=1TW10TS0F) and install it on top of the TSAM 7.2.2 platform, your cloud administrator can easily configure the Extension for Network Attached Storage to provision NFS-mountable file systems. The extension provides a plug-in to the Cloud Storage Pool Administration application in TSAM, where she can enter the hostname of the workstation running the NetApp OnCommand management software and the credentials to access it. The extension then automatically discovers all the storage resources (NetApp Datasets) from the underlying storage systems and makes them visible as TSAM Storage Pools. At that point the cloud administrator can regulate access to the storage resources in the usual TSAM way, associating storage pools and quotas with customers, and that's it: the extension is configured. Now you can delegate to your customers the management of storage up to the assigned quota. Customer administrators can start requesting storage for their virtual servers by creating storage projects and adding, expanding, and deleting file systems. The entry point for this is the Tivoli Self Service Station - Storage Management folder (shown in the picture below).
The Create Storage Project offering brings a simple user interface for requesting file systems and assigning them to teams of users (see the example pictures below).
The customer administrator has to enter a prefix for the NFS export name, a TSAM Storage Pool from which to carve the storage, and the size of the file system; that's it. She can create many file systems with the same characteristics by increasing the value of the "Number" spin control, and she can make the file systems available to all the customer's teams by checking the "Access to All Teams" box; by default the storage is only visible to the team of users that owns the storage project.
Note that once the storage project has been created, the file systems cannot be mounted into virtual servers yet, because no ACL has been set for them on the IBM N series boxes. To set one, the customer administrator creates TSAM projects with virtual servers and associates file systems with the virtual machines belonging to the project; the extension automatically updates the access control list (ACL) of the NFS export name, adding the IP addresses of the virtual machines. When the user logs in, she can mount the file systems and use them (she receives the NFS export name in a notification e-mail).
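The ACL update step can be sketched abstractly as follows. The export names, IP addresses and dictionary shape are invented for illustration; the real extension drives the NetApp management interface rather than an in-memory structure:

```python
def grant_access(acls, export_name, vm_ips):
    """Add VM IP addresses to the access list of an NFS export."""
    # Create the export's entry on first use, then merge in the new IPs
    entry = acls.setdefault(export_name, set())
    entry.update(vm_ips)
    return acls

acls = {"/vol/proj1_fs1": {"10.0.0.5"}}
grant_access(acls, "/vol/proj1_fs1", ["10.0.0.7", "10.0.0.8"])
print(sorted(acls["/vol/proj1_fs1"]))
```

Using a set keeps the operation idempotent: re-associating a file system with the same virtual machine does not duplicate the ACL entry.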
In summary, the predefined functions that you get with the TSAM Extension for NAS storage are:
- Service offerings for managing the entire life cycle (create, expand, destroy, set access) of shared file systems accessible through the NFS protocol
- A service offering for authorizing virtual servers to mount storage
- An administrative graphical user interface for discovering NetApp Datasets into TSAM Storage Pools and restricting usage by customer
There are no predefined features to create and manage NetApp Datasets, nor vFilers to create customer silos. For example, what if you want to automate the creation of a vFiler and a couple of storage pools (gold and silver) upon on-boarding of a new customer? There are also no predefined features to authorize the shared file systems to anything but a virtual server within a virtual servers project. What if you want to automatically attach a file system to a VMware cluster as a back-end datastore for VM images upon creation in a storage project?
Well, the TSAM Extension for NAS storage provides low-level Tivoli Provisioning Manager (TPM) workflows and Tivoli Platform Automation engine (TPAe) runbooks that can be used to implement such automations in custom extensions, which you can write based on the best practices described in the TSAM platform extensibility guide.
The open beta program for the upcoming IBM SmartCloud Provisioning release has started:
- Freely download the code and run it unattended on your premises, without the need to sign a non-disclosure agreement
- Discuss what you think about it on a dedicated forum
- Watch demonstrations of IBM SmartCloud Provisioning capabilities at work and tell us whether you like the newest features with the click of a button
- Join our community to get early access to, and provide feedback on, cloud provisioning and orchestration technologies
- Stay tuned to the community to hear the latest news on available code drops and functionality
- Play with the product on our premises by joining the hosted beta. To access the hosted beta, send an email to email@example.com
As businesses adopt cloud environments to control IT complexity, pool resources, and improve cost efficiencies, the TUAM development team has been evolving the usage and accounting capability in IBM Tivoli Usage and Accounting Manager (TUAM) beyond traditional enterprise chargeback.
In such a shared cloud environment, the ability to accurately assess which IT resources and services are being utilized, how much they are being utilized, and by whom is fundamental if service providers are to justify IT resource costs and expenses.
The latest release of IBM Tivoli Usage and Accounting Manager, Version 7.3, provides Cloud Cost Management for those businesses needing to understand the new and dynamic usage of shared IT resources in cloud and virtualized environments, and seeking to bill or charge business units for their share of resource use, including compute, storage, networks, energy, and personnel.
Read more about the new TUAM Cloud Cost Management Extension v1.0 for Tivoli Service Automation Manager (TSAM) in our blog update
IBM Tivoli Usage and Accounting Manager allows businesses to:
- Link their Cloud IT expenditures to business value delivered
- Accurately allocate cost across functions and departments/projects
- Understand true IT costs resulting in better IT investment decisions and get more out of their current investments
- Quantify the costs associated with services delivered including virtualized, cloud, storage area network (SAN), and service-oriented architecture (SOA) environments
- Interactively report and, if desired, bill or charge departments and functions accurately for their use of IT resources
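At its core, the billing capability in the list above reduces to rating usage records. A minimal sketch with invented rates, metrics and records (not TUAM's actual rating engine):

```python
# Hypothetical per-unit rates for two metered resources
RATES = {"cpu_hours": 0.05, "gb_storage": 0.10}

def invoice(usage_records):
    """Aggregate charges per department from (dept, metric, quantity) records."""
    totals = {}
    for dept, metric, qty in usage_records:
        totals[dept] = totals.get(dept, 0.0) + qty * RATES[metric]
    return totals

records = [
    ("Finance", "cpu_hours", 120),
    ("Finance", "gb_storage", 500),
    ("Marketing", "cpu_hours", 40),
]
print(invoice(records))
```

The hard part in practice is not this arithmetic but collecting accurate usage records across virtualized, SAN and SOA environments and mapping them to the right account codes, which is what the product automates.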
Additionally, the development team is working to supplement these core capabilities with new price tiering and invoice preview features for cloud administrators and consumers. These features will be provided to TUAM users via the IBM Integrated Service Management Library from October 2011.
Please contact our usage and accounting architect John Buckley (firstname.lastname@example.org) if you wish to learn more about the new Cloud use cases or share your thoughts on them.
Modified by rossella
IBM Cloud Orchestrator 2.5 comes with a set of interesting new features.
First of all, support for OpenStack Kilo; this opens the door to a set of very interesting scenarios related to software-defined environments (think about the Neutron capabilities in Kilo). Moreover, you can now leverage either the OpenStack distribution provided by IBM (IBM Cloud Manager with OpenStack 4.3) or another OpenStack distribution based on Kilo. Orchestrating workloads on a non-IBM OpenStack distribution no longer relies on the Public Cloud Gateway.
The list of supported public clouds has been enriched with the addition of Microsoft Azure: from the IBM Cloud Orchestrator self-service user interface you can now register Microsoft Azure regions, manage deployment artifacts and manage resources.
The pattern engine is now based on OpenStack Heat, with no proprietary technology involved any more; the user experience has been enhanced, allowing you to store and select Heat templates from the self-service UI.
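Since the pattern engine is Heat-based, a pattern is ultimately just a HOT template. As a purely illustrative sketch (the image and flavor names below are assumptions, not taken from the product documentation), a minimal template of the kind such an engine could store and deploy might look like:

```yaml
# Minimal illustrative HOT template; image and flavor names are hypothetical.
heat_template_version: 2015-04-30    # HOT version introduced with Kilo

parameters:
  key_name:
    type: string
    description: Name of an existing keypair

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: base-linux-image        # assumed image name
      flavor: m1.small               # assumed flavor name
      key_name: { get_param: key_name }

outputs:
  server_ip:
    value: { get_attr: [my_server, first_address] }
```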
The installation procedure has been simplified: the number of required servers has been reduced to lower hardware requirements, and a prerequisite checker has been added to enable fast detection of possible failure points.
For further information, visit the IBM Cloud Orchestrator 2.5 knowledge center.
The IBM Cloud Orchestrator 2.5 announcement letter is here.
I am pleased to announce that we have a new public forum for SmartCloud Orchestrator where users can discuss technical topics related to the product and address questions they might have.
SmartCloud Orchestrator has just been released. It is IBM's new private cloud offering based on OpenStack and other cloud standards.
You can read more about it in the Announcement Letter and we would be very happy to see you join the SmartCloud Orchestrator beta program.
This forum is a discussion platform and does not replace the IBM support. I hope you find this forum useful and it helps in the formation of an online user community.
Birgit Nuechter, Field Quality Manager for IBM SmartCloud Orchestrator
A common adoption pattern for cloud computing is desktops. It's really straightforward because, in general, each company has standardized desktops: only specific versions of the operating system are supported, only specific flavours, only certain applications are allowed, and typically everything is managed by the IT team.
If we think about the benefits of adopting desktop cloud, some of them immediately stand out: the IT team can really enforce standardization (e.g. you can select as your desktop only one of the proposed flavours); the maintenance of the hardware becomes far easier thanks to its consolidation; and old, outdated PCs can be used simply as connectors to the desktop, hence gaining new life. From the desktop user's point of view, there is no longer any need to carry company assets around: healthier (no more heavy hardware to take home or on your travels) and safer (data is in the cloud).
But this is nothing new: desktop cloud solutions are already on the market, so let's see whether IBM SmartCloud Provisioning can bring additional benefits to the desktop world.
What if we start dealing with non-persistent desktop images? Non-persistent images are the ones that disappear once you shut them down. You might be asking yourself: “Well, that's not so clever. What about my data? Is it lost?”. This is actually a very good point, and it is the keystone of the benefits that come with the adoption of non-persistent images. The idea is that all user data gets stored on external (persistent) volumes that can be attached to and detached from the non-persistent image on demand.
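The split between the throwaway image and the surviving user data can be pictured with a toy model (this is an illustration of the concept, not SmartCloud Provisioning code):

```python
# Toy model of the non-persistent-desktop idea: the desktop image is
# ephemeral, while the user's volume survives and can be re-attached
# to a freshly deployed image.

class PersistentVolume:
    """External volume: user data lives here and survives image teardown."""
    def __init__(self):
        self.files = {}

class DesktopInstance:
    """A desktop deployed from a given master image version."""
    def __init__(self, master_image_version):
        self.os_version = master_image_version
        self.volume = None     # no user data until a volume is attached

    def attach(self, volume):
        self.volume = volume

    def detach(self):
        vol, self.volume = self.volume, None
        return vol

# The user works on a desktop deployed from the v1 master image.
home = PersistentVolume()
old_desktop = DesktopInstance("v1")
old_desktop.attach(home)
old_desktop.volume.files["report.txt"] = "draft"

# Migration: discard the old image, deploy from the v2 master image,
# and re-attach the same volume -- the user data is still there.
saved = old_desktop.detach()
new_desktop = DesktopInstance("v2")
new_desktop.attach(saved)
print(new_desktop.volume.files["report.txt"])   # draft
```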
If we now apply this technology to the desktop world, it sheds interesting new light on some typical and painful scenarios:
- system or software patching
- the compliance of the desktops
- changes in the number of desktop users
In a traditional infrastructure, when the operating system goes out of maintenance, or is getting close to it, a massive migration campaign starts: all desktops need to be migrated. Statistically, the migration does not go smoothly for all users, and some of them will be stuck even for days. If you use non-persistent images, you can easily overcome this by either creating a new master image with the new operating system or upgrading a single instance of the image; run your test campaign to make sure everything keeps working, then deploy it in as many instances as there are desktops to upgrade, attach the volumes with the user data to the new images, and get rid of the old images. If you leverage the incredible deployment speed of IBM SmartCloud Provisioning, you'll have a brand new set of desktops in no time.
Analogously, we may think about patching the operating system or a piece of software running on the desktop: the key idea is that you're always going to patch either the operating system or a specific application, never the user data, which keeps living on separate volumes.
If we think about the compliance aspect, remember that the user cannot save any change made to the boot disk of the image, since nothing ever gets stored on that disk. Users are only able to write their own data to the additional volumes. This should discourage them from even trying to install new software or edit the operating system configuration, since everything will be lost at the first shutdown.
I know that in your company you may have different configuration flavours of the same operating system, depending on the department for which the desktop is tailored. For example, you may need different firewall configurations according to the security level the end user is entitled to. Well, with IBM SmartCloud Provisioning you can leverage the User Data field at deployment time to specify these special configurations. Of course, this need not even be shown to the end user: you can hide it by enlarging the list of offerings with the specific configurations. Under the covers, the instance is launched with the proper parameters: no master image duplication, no manual configuration; everything is automated and standardized.
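The idea can be sketched as follows (all names and fields here are hypothetical illustrations, not the actual SmartCloud Provisioning API): each catalog offering maps to the same master image plus a department-specific User Data payload passed at deployment time.

```python
# Hypothetical sketch of per-department offerings backed by one master image.
# The user_data payload carries the department-specific firewall settings;
# names and fields are illustrative, not the real SmartCloud Provisioning API.

MASTER_IMAGE = "corp-desktop-base"     # single master image, never duplicated

OFFERINGS = {
    "Finance desktop":     {"firewall_profile": "strict"},
    "Engineering desktop": {"firewall_profile": "developer"},
    "HR desktop":          {"firewall_profile": "standard"},
}

def build_deploy_request(offering_name):
    """Turn a catalog selection into a deployment request with User Data."""
    config = OFFERINGS[offering_name]
    return {
        "image": MASTER_IMAGE,                 # same image for every offering
        "user_data": f"firewall_profile={config['firewall_profile']}",
    }

req = build_deploy_request("Finance desktop")
print(req["image"])      # corp-desktop-base
print(req["user_data"])  # firewall_profile=strict
```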
What about optimizing resources? Desktops by their nature all have the same operating system and configuration (at least per department), and usually they also come with the same applications installed on top. If you deal with non-persistent images, you simply avoid saving lots of duplicated, useless copies of the same operating system and software on disk. Then, if you consider that once a desktop is shut down its resources (i.e. cores and memory) are released, you can better optimize your hardware by using those resources for other applications or users (they may even be server applications, or desktops for users residing in a different time zone).
New people coming on board? A project outsourced to an external workforce? You will want these people to be productive immediately. With IBM SmartCloud Provisioning, their desktops will be up and running in no time.
More information about IBM SmartCloud Provisioning can be found in the IBM SmartCloud Provisioning WIKI and in the IBM SmartCloud Provisioning infocenter.
See IBM SmartCloud Provisioning working in a recorded demo.
I've been impressed by the speed of
provisioning a set of virtual machines in just a few tens of seconds
using IBM Smart Cloud Provisioning. In most cases you can get a
running virtual machine in less than one minute.
The Smart Cloud Provisioning technology has been devised and particularly optimized for managing the following cloud infrastructure scenarios:
- Infrastructure composed of
- A high level of standardization, with a relatively small set of master images used to provision many instances from the same image
- A typical life cycle of the provisioned resources with a short average lifetime of provisioned instances
Many other workloads can be deployed and easily automated on top of Smart Cloud Provisioning. For example, traditional stateful applications can easily be deployed for simple HA solutions. However, you get the maximum performance from Smart Cloud Provisioning when operating in the context of the above scenarios.
To achieve such high performance, Smart Cloud Provisioning has been designed around an optimized virtualization infrastructure based on OS streaming: there is no need to copy large image files over the network when provisioning.
Image copying is the single biggest bottleneck in VM provisioning today, in terms of CPU, memory, I/O and bandwidth usage. In traditional Cloud provisioning approaches, all of this work consumes system resources and is pure overhead (nobody builds a Cloud just to provision systems: provisioning is an overhead required to obtain systems on which business workload is deployed, and any overhead is in conflict with the business).
The key element of such an infrastructure is the so-called ephemeral instance: a virtual machine with no persistent state. Once an ephemeral instance is terminated, all the data associated with it is deleted as well. Ephemeral instances are clones of a master image, and these clones have a primary virtual disk which is ephemeral: when the instance goes, so does its ephemeral storage (mechanisms exist in Smart Cloud Provisioning to provide persistence, if needed by some scenarios).
When creating a new instance, since master images are read-only resources replicated across the storage cluster, Smart Cloud Provisioning uses Copy-on-Write (CoW) technology and the iSCSI protocol to stream them, avoiding expensive copying. Each iSCSI session results in a valid block device being created in the host OS. Of course, each guest OS (corresponding to a given instance) requires a writable block device representing the main disk of the system. All supported hypervisors have a storage virtualization layer which includes Copy-on-Write technology. For example, KVM's qcow2 files can be configured to implement CoW by referencing a backing storage device. VMware has something called redo files, which effectively do the same thing. In each case, the hypervisor can natively use the CoW file referencing the iSCSI block device to expose a virtual block device to the virtual machine. Depending on the hypervisor and guest OS, this device will show up as something like /dev/sda or c:\. The CoW files are stored locally on the hypervisor's file system. When the instance is terminated, the Smart Cloud Provisioning agent will simply discard the CoW file and check whether any other instances are using the same iSCSI device. If the device is no longer in use, the agent will also tear down the iSCSI session.
Thanks to the above infrastructure, provisioning a new virtual machine becomes a very fast and reliable process that allows individual systems to be created in tens of seconds, and peak request rates of thousands of systems per hour to be handled.
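The copy-on-write mechanism described above can be modeled with a small sketch (a simulation of the idea only, not product or hypervisor code): reads fall through to the shared, read-only master image unless the block has been written locally, and discarding the overlay loses only the local writes.

```python
# Minimal simulation of the copy-on-write overlay idea behind qcow2
# backing devices: many instances share one read-only master image,
# and each keeps only its own modified blocks.

class MasterImage:
    """Read-only backing store shared by many instances (e.g. via iSCSI)."""
    def __init__(self, blocks):
        self._blocks = blocks

    def read(self, block_no):
        return self._blocks[block_no]

class CowOverlay:
    """Per-instance writable layer, like a qcow2 file with a backing device."""
    def __init__(self, master):
        self._master = master
        self._local = {}          # only the blocks this instance has written

    def read(self, block_no):
        # A local write wins; otherwise stream the block from the master image.
        return self._local.get(block_no, self._master.read(block_no))

    def write(self, block_no, data):
        self._local[block_no] = data

master = MasterImage(["boot", "os", "apps"])
vm1 = CowOverlay(master)
vm2 = CowOverlay(master)
vm1.write(2, "patched-apps")
print(vm1.read(2))   # patched-apps -- vm1 sees its local change
print(vm2.read(2))   # apps -- vm2 still streams the master block
```

Terminating vm1 corresponds to simply discarding its overlay: the master image and the other instances are untouched.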
If you're interested in trying the
Smart Cloud Provisioning product, you can download a trial version
from the following link:
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
For many, the development process has become more complex and segregated from operations. Factors such as inefficient communications, manual processes and poor visibility into the deployment process result in production bottlenecks as well as subpar quality throughout the development and delivery cycle.
To address these challenges, organizations have often turned to ad hoc and siloed efforts, so gaps still exist due to a lack of integration across people, processes and tools. The reality is that an effective DevOps solution requires an integrated approach of continuous delivery that optimizes and accelerates the application lifecycle in every phase: development, testing, staging and production.
What this means is that changes made in development are continuously built, integrated and tested for function, performance, systems verifications, user acceptance, and then staged, ready for production. And it can all be brought together through an integration framework that can automate the individual tasks across the various stages of the pipeline and continuously deliver changes, providing end-to-end lifecycle management. Continuous automation is necessary in the following key areas:
• Continuous integration
provides faster validation and delivery of code changes via automated, repeatable execution of build processes with continuous feedback
• Continuous deployment
provides on-demand environment configuration and the ability to continuously deploy code and configure middleware.
• Continuous testing
automates testing in production-like environments.
• Continuous monitoring
increases visibility into application performance and provides data to trace and isolate product defects.
With an automated process for moving application changes through progressively richer test environments that mirror the production environment, the chances of error and rollback are greatly reduced.
The result is increased visibility into the delivery pipeline, standardized communication between Dev and Ops and more efficient and accurate delivery of software projects. And the delivery process can scale dynamically as business needs grow.
Here’s how IBM is addressing DevOps, with the launch of SmartCloud Continuous Delivery -- an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. SmartCloud Continuous Delivery is also available on Jazz.net.
Many readers liked the post “Rapid deployments with IBM SmartCloud Provisioning”, which explains how simple and fast it is to deploy instances using SmartCloud Provisioning.
But after the instances are deployed, the next question is:
- How can I "easily" monitor the performance and availability of the OS and applications of launched instances?
One solution is to integrate IBM SmartCloud Provisioning with IBM Tivoli Monitoring (ITM) so that all the running instances are connected to the ITM Server and managed according to the performance expectations.
This can be achieved by exploiting the current integration between IBM SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in IBM SmartCloud Provisioning version 1.2, and performing the following steps:
- Download the bundle “ICCT Bundle to deploy IBM Tivoli Monitoring Agent for Linux OS v6.X” from the Integrated Service Management Library (ISML) website
- Extend an OS base image available in IBM SmartCloud Provisioning, adding the above bundle
In this way, a new image will be available in IBM SmartCloud Provisioning with the ITM Agent installed and configured to connect to the IBM Tivoli Enterprise Monitoring Server. This means that, when the extended image is launched, the ITM agent will automatically start and connect to the ITM Server without requiring any user action. Then, from the ITM console, you will be able to see and monitor it and take actions to address performance issues.
If you are interested in other extensions available on ISML, here is a list of available bundles you can download and use to extend a base image (search in ISML for “ICCT”):
- ICCT Bundle to deploy IBM Tivoli Directory
- ICCT Bundle to deploy IBM HTTP Server 7.0
- ICCT bundle to deploy IBM WAS Update
- ICCT Bundle to deploy Apache Web Server
- ICCT Bundle to deploy IBM DB2 Server 9.7
- ICCT Bundle to deploy IBM Tivoli
Monitoring Agent for WAS
- ICCT Bundle for IBM WebSphere MQ
- ICCT Bundle to deploy IBM WebSphere
Application Server Network Deployment 8.0
- ICCT Bundle to deploy IBM WebSphere
Application Server Community Edition 3.0
- ICCT Bundle to deploy IBM WebSphere MQ
- ICCT Bundle to deploy IBM
Tivoli Monitoring Agent for DB2
- ICCT Bundle to deploy IBM
Tivoli Endpoint Manager Agent
- Software bundle for IBM License Metric Tool Agent for System X platform