I am pleased to announce that we have a new public forum for SmartCloud Orchestrator where users can discuss technical topics related to the product and address questions they might have.
SmartCloud Orchestrator has just been released. It is IBM's new private cloud offering based on OpenStack and other cloud standards.
You can read more about it in the Announcement Letter and we would be very happy to see you join the SmartCloud Orchestrator beta program.
This forum is a discussion platform and does not replace IBM support. I hope you find this forum useful and that it helps form an online user community.
Birgit Nuechter, Field Quality Manager for IBM SmartCloud Orchestrator
Are you a cloud addict?
Do you want to be always updated on the latest technologies?
Do you want to contribute to development decisions in the IBM SmartCloud Provisioning world?
Are you just curious to see what's cooking in IBM SmartCloud Provisioning development?
If you answered "yes", have a look at the Upcoming Feature page in the IBM SmartCloud Provisioning wiki.
It is starting to be populated with recorded demos and screenshots showing what is currently on the developers' minds.
Each page is equipped with a set of buttons to immediately and easily provide your feedback directly to the development team.
More ideas, thoughts, videos will be added shortly on the same page, so stay tuned!
...Just do not get lazy, click the links!!!
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
For many, the development process has become more complex and segregated from operations. Factors such as inefficient communications, manual processes and poor visibility into the deployment process result in production bottlenecks as well as subpar quality throughout the development and delivery cycle.
To address these challenges, organizations have often turned to ad hoc and siloed efforts. As a result, gaps still exist due to a lack of integration across people, processes and tools. The reality is that an effective DevOps solution requires an integrated approach of continuous delivery that optimizes and accelerates the application lifecycle in every phase: development, testing, staging and production.
What this means is that changes made in development are continuously built, integrated and tested for function, performance, systems verifications, user acceptance, and then staged, ready for production. And it can all be brought together through an integration framework that can automate the individual tasks across the various stages of the pipeline and continuously deliver changes, providing end-to-end lifecycle management. Continuous automation is necessary in the following key areas:
• Continuous integration
provides faster validation and delivery of code changes via automated, repeatable execution of build processes with continuous feedback
• Continuous deployment
provides on-demand environment configuration and the ability to continuously deploy code and middleware configuration.
• Continuous testing
automates testing in production-like environments.
• Continuous monitoring
increases visibility into application performance and provides data to trace and isolate product defects.
With an automated process for moving application changes through progressively richer test environments that mirror the production environment, chances for error and roll back are greatly reduced.
The result is increased visibility into the delivery pipeline, standardized communication between Dev and Ops and more efficient and accurate delivery of software projects. And the delivery process can scale dynamically as business needs grow.
Here’s how IBM is addressing DevOps, with the launch of SmartCloud Continuous Delivery: an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. SmartCloud Continuous Delivery is also available on Jazz.net
Was out for a few drinks last week with friends from the
local tech community where we try to solve the world’s problems. We were enjoying sitting in the sunshine and
warm weather on the patio of a local brewery discussing the interesting topics
of the week. Before we got into the
presidential election the talk turned to virtualization, of course, and the
current directions their companies are taking.
Everyone has heard about the pricing actions from VMware causing major
concerns among their customers, but these topics always seem somewhat
abstract to me until you hear about people taking action. Well, it turns out the guy who works at a
large tech company in Austin
was actually in the process of installing KVM to replace VMware in his area of
the company due to pricing issues. He
went on to say that a CIO friend of his at another company was in the process
of doing the same thing. That’s the
thing about IT people, once you upset them they can take decisive action to fix
the problem and they tend to have very long memories. I also came across this CNET
article subtitled “Market research firm IDC says that data from a new survey
shows that 'open cloud is key for 72 percent of customers.'” Clearly there seems to be a pricing level
that makes open cloud solutions look very attractive. As more companies try to balance the need for
virtualization and cloud with the costs of the solutions, I think the open
cloud will become more attractive.
IBM is working with OpenStack which is a global collaboration
of developers and cloud computing technologists producing the ubiquitous open
source cloud computing platform for public and private clouds. The OpenStack Summit, which is sold out, is
going on this week (October 15th) in San Diego; you can read about it and other
topics at the OpenStack blog. There are also some great insights on the
topic at the Linux Foundation blog from IBM’s
Angel Diaz: 3 Projects creating user-driven standards for the open cloud . I think this is definitely a space worth
watching…stay tuned for more updates from the patio…
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
The need to orchestrate really becomes clear when various aspects of cloud management are brought together. The value to the organization at this stage of cloud is simplifying the management of automation – otherwise a balancing act to manage multiple hypervisors, resource usage, availability, scalability, performance and more -- based on business needs from the cloud, with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it’s difficult to realize the full benefits of cloud computing. The stitching together of best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
In addition to rapid service delivery, the benefit of orchestration is that there can be significant cost savings associated with labor and resources by eliminating manual intervention and management of varied IT resources or services.
Some key traits of cloud orchestration include:
• Integration of cloud capabilities across heterogeneous environments and infrastructures to simplify, automate and optimize service deployment
• Self-service portal for selection of cloud services, including storage and networking, from a predefined menu of offerings
• Reduced need for manual intervention, allowing a lower ratio of administrators to physical and virtual servers
• Automated high-scale provisioning and de-provisioning of resources with policy-based tools to manage virtual machine sprawl by reclaiming resources automatically
• Ability to integrate workflows and approval chains across technology silos to improve collaboration and reduce delays
• Real-time monitoring of physical and virtual cloud resources, as well as usage and accounting chargeback capabilities to track and optimize system usage
• Prepackaged automation templates and workflows for most common resource types to ease adoption of best practices and minimize transition time
In short, many of the capabilities that we associate with cloud computing are really elements of orchestration. In an orchestrated environment, organizations gain tools to manage their cloud workloads through a single interface, providing greater efficiency, control and scalability. As cloud environments become more complex and organizations seek greater benefit from their computing resources, the need for sophisticated management solutions that can orchestrate across the entire environment will become ever clearer.
Learn more about how cloud orchestration capabilities can help your business. And join the Cloud Provisioning and Orchestration development community to test out the latest cloud solutions and provide feedback to impact development.
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSP) or cloud service providers (CSP), and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce human resources and error from manual tasks—all with an eye to drive revenue and acquire new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify complex and time-consuming processes for creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the full spectrum of functionality, from virtualization to orchestration, helps service providers manage their environments, high-scale provisioning in particular offers a cost-effective way to leverage capacity as a business commodity—a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative costs of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand but it was very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interruption to customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
Learn more about provisioning and orchestration capabilities
that are helping service providers to meet their growing business needs.
Service Health for IBM SmartCloud Provisioning has officially GA'ed and is now available on IBM Integrated Service Management Library ( ISML ).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring, utilizing a custom agent, OS agents, and the ITMfVE agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to identify and react quickly to issues in your environment—such as an unresponsive compute node, high disk usage on storage nodes, or key kernel services not responding—and minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library( ISML ) following this link -> Service Health for IBM SmartCloud Provisioning
A presentation and demo session for IBM SmartCloud Provisioning will be held on Thursday, April 26th at 3:00 PM Central European Time (CET).
The presentation will be around architectural changes in IBM SmartCloud Provisioning.
The demo will be about registering High Scale Low Touch as cloud group in IBM SmartCloud Provisioning.
If you would like to join, using your web browser, connect to http://www.ibm.com/collaboration/meeting/join?id=LMT71CA
No password is required
In a dynamic cloud environment, standard concepts like IP addresses and storage volumes take on a special meaning when it comes to reserving and using them independently of the virtual machines owned by a cloud user.
The concept of Elastic IP (EIP) and Elastic Block Storage (EBS) was initially introduced by Amazon EC2 as a way to decouple the resources assigned to a cloud user from their utilization. In other words, as a cloud user you can reserve an elastic resource and assign it to one of the VMs you own, but you can also re-assign it to a different VM whenever you need (for example, whenever you need to replace your VM with a new one).
SmartCloud Provisioning offers similar capabilities exposing the concepts of Static Addresses and Persistent Volumes that can be reserved and assigned to any running VMs.
A SmartCloud Provisioning address is a statically defined address which can be dynamically bound to any instance in the cloud. In other words, a static IP address is associated with your account, not with a particular instance, and you control that address until you choose to explicitly release it.
Let’s examine in more detail how it works.
When SmartCloud Provisioning creates a VM, it assigns a dynamic IP address to it, on a default management sub-network. From this point on, the system always refers to the VM using the dynamic address assigned at boot time. Nonetheless, SmartCloud Provisioning offers to cloud users the possibility of assigning a different IP address, which can be seen as a reserved and static IP.
In order to achieve this result, a centralized pool of addresses is registered by the cloud administrator and stored in a durable data service. A cloud user can then reserve one or more addresses from this pool, and can associate one of them with a specific VM they own. Note that the cloud user has no control over which address will be reserved; they do not even know upfront whether any static IP addresses are left until they send the reservation request.
Once a static IP has been reserved and assigned to a VM, SmartCloud Provisioning internally creates a mapping between the reserved IP address and the default dynamic address associated with the selected VM. This translates into NAT rules in the host OS's iptables that forward all traffic to the private address of that VM.
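As a rough illustration of that mapping, the following sketch builds the kind of iptables NAT rules involved: inbound traffic to the static address is DNAT'ed to the VM's private address, and outbound traffic is SNAT'ed back. The addresses are hypothetical, and the product builds its equivalent rules internally; this only shows the shape of the translation:

```python
# Hypothetical sketch: generate iptables NAT commands that map a reserved
# static IP to a VM's dynamic (private) address. Addresses are examples.

def nat_rules(static_ip, dynamic_ip):
    """Return iptables commands mapping a static IP to a VM's private
    address: DNAT for inbound traffic, SNAT for outbound traffic."""
    return [
        f"iptables -t nat -A PREROUTING -d {static_ip} "
        f"-j DNAT --to-destination {dynamic_ip}",
        f"iptables -t nat -A POSTROUTING -s {dynamic_ip} "
        f"-j SNAT --to-source {static_ip}",
    ]

rules = nat_rules("9.0.0.10", "172.16.1.23")
```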
In this way you can always refer to your VM using the static address, and even if you decide to re-create the VM, you can reassign that same address to the new VM.
The address remains in your reserved list as long as you need it, and you can release it when you no longer need it.
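The reservation semantics described above can be summarized in a short sketch. The pool class and method names are illustrative only, not the SmartCloud Provisioning API:

```python
# Minimal sketch of static-address reservation semantics: addresses belong
# to an account, not to a VM, and can be re-bound or released at will.
# All names here are hypothetical.

class StaticAddressPool:
    def __init__(self, addresses):
        self.free = list(addresses)   # registered by the cloud administrator
        self.reserved = {}            # address -> owning account
        self.bound = {}               # address -> VM instance (or None)

    def reserve(self, account):
        """Reserve some free address; the user cannot choose which one."""
        if not self.free:
            raise RuntimeError("no static addresses left")
        addr = self.free.pop(0)
        self.reserved[addr] = account
        self.bound[addr] = None
        return addr

    def bind(self, addr, vm):
        self.bound[addr] = vm         # re-binding to a new VM is allowed

    def release(self, addr):
        del self.reserved[addr]
        del self.bound[addr]
        self.free.append(addr)        # back in the pool for other users

pool = StaticAddressPool(["9.0.0.10", "9.0.0.11"])
addr = pool.reserve("alice")
pool.bind(addr, "vm-001")
pool.bind(addr, "vm-002")   # same address moved to a replacement VM
```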
Persistent storage is critical to any non-trivial production application. Just as Amazon's EBS has proven to be extremely valuable, SmartCloud Provisioning persistent volumes are equally powerful, offering off-instance storage that persists independently of the life of an instance. Users can create arbitrary numbers of arbitrarily sized persistent volumes. The volumes can be dynamically attached to any VM in the cloud, provided only one instance is attached at any time.
Once attached, a persistent volume appears to the guest OS like any other raw, unformatted block device.
Each persistent volume is assigned a UUID, which can be leveraged by the cloud user to track them.
RAID sets can easily be created from multiple volumes, ensuring each volume is hosted on a separate physical host/device.
Multiple block devices will then be exposed to the guest OS, which can establish its own RAID meta-devices using tools like mdadm.
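For instance, with two attached volumes visible in the guest (device paths here are hypothetical), a RAID-1 mirror could be assembled with a standard mdadm invocation, sketched below as a generated command string:

```python
# Sketch: build the mdadm command a guest might run to mirror two attached
# persistent volumes (each hosted on a separate physical host/device).
# Device paths are hypothetical examples.

def mdadm_mirror(md_device, block_devices):
    """Return an mdadm command creating a RAID-1 mirror over the given
    block devices."""
    return (f"mdadm --create {md_device} --level=1 "
            f"--raid-devices={len(block_devices)} " + " ".join(block_devices))

cmd = mdadm_mirror("/dev/md0", ["/dev/vdb", "/dev/vdc"])
```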
Behind the scenes, these block devices are very similar to the primary boot disk of a non-persistent VM. However, they are read-write iSCSI devices attached directly to the instance without leveraging copy-on-write. Note that persistent block storage is also hosted on the same storage cluster used for master images.
Similarly to the static IP addresses, the persistent volumes are associated with your account, not with a particular instance, and you control them until you choose to explicitly delete them.
Persistent volumes allow you to keep your data separate from the OS, offering you the possibility to move them from one VM to another whenever you need. Moreover, they offer a valid mechanism to keep your data safe when dealing with VMs that do not have dedicated persistent storage (the non-persistent VMs managed by SmartCloud Provisioning).
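The volume lifecycle described in this section can be sketched as follows: volumes are account-owned, tracked by UUID, and attachable to at most one instance at a time. The class and method names are illustrative, not the product API:

```python
# Hypothetical sketch of persistent-volume semantics: UUID-tracked,
# single-attachment, and data that outlives any particular VM.

import uuid

class PersistentVolume:
    def __init__(self, size_gb, account):
        self.id = str(uuid.uuid4())   # UUID used by the user to track it
        self.size_gb = size_gb
        self.account = account        # owned by the account, not a VM
        self.attached_to = None

    def attach(self, vm):
        if self.attached_to is not None:
            raise RuntimeError("volume already attached to " + self.attached_to)
        self.attached_to = vm         # appears as a raw block device in the guest

    def detach(self):
        self.attached_to = None       # data survives; re-attach anywhere

vol = PersistentVolume(100, "alice")
vol.attach("vm-001")
vol.detach()
vol.attach("vm-002")                  # moved to another VM, data intact
```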
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
We're pleased to make available as a beta Service Health for IBM SmartCloud Provisioning. As this is a beta we welcome any and all feedback.
Service Health (Beta) for IBM® SmartCloud Provisioning provides prebuilt integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring. This solution allows you to easily monitor your IBM SmartCloud Provisioning infrastructure to identify and react to issues in your environment.
This solution is available via the IBM Integrated Service Management Library (ISML). You can find it here -> Service Health for IBM SmartCloud Provisioning. Please use the "Comment or Review" link on that page to post feedback. You may also use the "Contact Provider" link.
SmartCloud Provisioning is designed to minimize the use of a centralized “command and control” approach, in favor of scale out management, where endpoints can participate in management activities and do not depend on a single configuration management database.
This allows SmartCloud Provisioning to handle multiple provisioning tasks in parallel, across an unlimited number of servers.
Cloud users can request deployments of virtual machines and have access to the provisioned systems in just a few seconds, thanks to the parallel and distributed processing that happens transparently and under the covers.
Let’s drill down into the details about this distributed management approach.
SmartCloud Provisioning internally uses a peer-to-peer (P2P) messaging infrastructure to pass provisioning and management messages between agents, which contributes to the decentralized control.
Agents are installed on the compute nodes (i.e. the hypervisors) as well as on the storage nodes, where images and volumes reside.
The P2P connections between agents not only allow self-monitoring of their health in order to implement a low-touch management infrastructure, but also allow orchestrating the communications to achieve an effective load distribution and decentralized management of the requests performed by cloud users.
The P2P communication overlay is backed by a distributed lock service, which is based on ZooKeeper.
ZooKeeper is a distributed, open-source coordination service for distributed applications, which exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming. It is designed to be easy to program, and uses a data model styled after the familiar directory tree structure of file systems.
Like the distributed processes it coordinates, ZooKeeper itself is intended to be replicated over a set of servers that must all know about each other. They maintain an in-memory image of state, along with transaction logs and snapshots in a persistent store.
Each SmartCloud Provisioning agent connects to a single ZooKeeper server, maintaining a TCP connection through which it sends requests, gets responses, gets watch events, and sends heartbeats. If the TCP connection to the server breaks, the agent connects to a different server.
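That failover behavior can be sketched in a few lines. The server names and the stubbed `connect()` are illustrative; a real agent would open a TCP session, exchange heartbeats and receive watch events:

```python
# Sketch of agent-to-ensemble failover: keep one connection at a time and
# move to the next server when the connection breaks. Names are examples.

import itertools

class AgentConnection:
    def __init__(self, ensemble):
        # cycle through the ensemble so every server is eventually tried
        self._servers = itertools.cycle(ensemble)
        self.server = None
        self.connect()

    def connect(self):
        self.server = next(self._servers)   # stand-in for: open TCP session,
                                            # send heartbeats, receive watches

    def on_connection_lost(self):
        self.connect()                      # fail over to a different server

agent = AgentConnection(["zk1:2181", "zk2:2181", "zk3:2181"])
first = agent.server
agent.on_connection_lost()
```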
When a deployment request is received by SmartCloud Provisioning, the request is processed by the Web Services layer, passed to the management infrastructure, and managed by the agents and the ZooKeeper services.
The following steps describe the internal communications in more detail, as depicted in figure 1 below.
This processing is transparent to the end user, who just sees the deployment request being served in a few seconds.
- The Web Services layer accepts a deployment request (e.g. deploy 50 “Large” instances of image “LOB123-RHEL 6.0”), and triggers a first interaction with the ZooKeeper server to ask which agent in the compute-nodes layer can handle the request.
- The ZooKeeper server selects one of the available leaders in the compute-nodes layer and returns this information to the Web Services layer. The role of the selected leader is to initiate an internal handshake among the compute-node agents to process the incoming request.
- The Web Service layer receives the information about which agent to contact, and opens a connection to that agent, passing the deployment request details.
- The selected agent takes care of the request and starts a “discussion” phase with all the other leaders (one for each rack) in order to distribute the load of the incoming request among all the agents that could provide resources to fulfill it. This happens leveraging the P2P connection between agents.
- Inside each rack, the leader triggers a parallel P2P interaction with all the agents on all the compute nodes in that rack, to understand which agent can serve a portion of the incoming request. Each agent with enough free resources to serve “Large” instances answers the request coming from its leader, so that, at the end of the handshake, each leader knows which portion of the incoming request can be processed by which agent.
- At this point, each involved agent knows which part of the incoming request it is supposed to process. To start the real deployment step, the agent asks the ZooKeeper server where to find the “LOB123-RHEL 6.0” image to be deployed, according to the incoming request. ZooKeeper again answers by providing one of the available agent leaders in the storage-nodes layer.
- When an agent receives back the information about which storage node to connect to, it opens a P2P connection with the related agent and asks for the image it needs to fulfill the deployment request.
- The storage node agent leader starts in turn a P2P communication with the other leaders asking for the selected image. Each leader inside its managed rack triggers other P2P connections to ask each managed agent if it has a copy of the requested image.
- The storage leader initiating the request collects back all the details about agents having a copy of the requested image and selects at least two of them (default redundancy required by SmartCloud Provisioning), returning the information to the calling compute-node agent. The compute-node agent at this point can access the image and starts the deployment of VMs, according to its capacity and to the amount of work it offered to serve.
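The capacity handshake at the heart of this flow can be condensed into a small sketch: a leader asks each agent in its rack how many instances it can host, then carves the request up greedily. The rack layout, agent names and numbers below are illustrative only:

```python
# Condensed sketch of the leader/agent capacity handshake described above:
# split a deployment request across agents according to the free capacity
# each one reported. All names and numbers are hypothetical.

def distribute(request_size, rack_capacities):
    """Return {agent: instances_assigned}, honoring each agent's reported
    free capacity, or raise if the rack cannot place the whole request."""
    plan = {}
    remaining = request_size
    for agent, free in rack_capacities.items():
        if remaining == 0:
            break
        share = min(free, remaining)
        if share:
            plan[agent] = share
        remaining -= share
    if remaining:
        raise RuntimeError(f"{remaining} instances could not be placed")
    return plan

# e.g. deploy 50 "Large" instances across one rack's agents
plan = distribute(50, {"agent-1": 20, "agent-2": 15, "agent-3": 30})
```

In the real system this negotiation runs in parallel over P2P connections and across multiple racks; the greedy split here just shows how a request becomes per-agent work items.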
As I said, this processing happens under the covers very quickly, and the user does not have to worry about any of the steps above.
This approach enables high levels of parallelism and decentralized management, as well as scale-out capability achieved simply by increasing the number of servers.
If you're interested in trying the SmartCloud Provisioning distributed management capabilities, you can download a trial version from the following link:
Cloud systems have made huge improvements in terms of tracking and performance. In the “Rapid deployments with IBM SmartCloud Provisioning” blog, we showed that virtual machines or appliances can be started and configured in a matter of seconds. It has never been so easy to create a virtual machine (VM), install software, and configure middleware. However, with great power comes great responsibility… it is now easy to create a VM, but what is its lifecycle? Will it be destroyed after being used? Is the starting image deprecated, or is there a better starting image given the needed configuration and software installation requirements?
IBM SmartCloud Provisioning provides a component called IBM Virtual Image Library (also known as IVIL) to solve common issues that arise in large-scale virtualized environments:
• Tracking: Where are my images? How old are they? How are they related?
• Security: What is in the images? Are they secure? What software is installed?
• Redundancy: Are there image redundancies? Is there any difference between two images?
The list goes on.
IVIL can be integrated simply into your virtualization infrastructure; the only requirement to start using IVIL is the credentials needed to contact the virtualization infrastructure. No changes to your current virtualization environment are required. After credentials are provided, IVIL can automatically determine the provenance, state, and content of each virtual image or virtual machine in the virtualization environment. After the environment is registered, you will have a clear picture of your various images, their content, history, and similarity with one another. More importantly, as soon as IVIL is used in the infrastructure, it can be used to move images from one hypervisor vendor to another and keep track of these migrations. To summarize, IVIL not only keeps track of the changes of an image on one hypervisor, but continues to do so when images are in a heterogeneous environment.
A common way to track the contents and versioning of images is a naming convention. For example, a name such as RHEL_6.1_WebSphere7.1_v2.1 implies the image is Red Hat Linux 6.1 with WebSphere 7.1 installed, and that this is version 2.1 of this image. It is feasible to use this approach with a small number of images, but it becomes cumbersome and confusing with anything but small examples. Basic information that such names typically attempt to convey includes:
• What is the OS and OS version?
• What applications are installed, and their versions?
• Are the latest patches and updates installed?
• How does this image relate to other versions of the same or similar images?
Using an image naming convention can work in some cases and provide some of the needed information, but it does not scale beyond a small number of simple images. To solve this, IVIL provides versioning and provenance control to understand where an image comes from.
What is provenance? Simply put, provenance tracks the history of the image as it has evolved over time in the virtual environment. It tracks how the bits that make up the image came to be: through IVIL checkout operations, image clone operations, image copy operations, and so on. It is used to understand the lineage of an image from the perspective of the virtual system, which might or might not match how the user of IVIL views the image.
For example, let’s assume that you have an image called “A”. If you decide to start this image on multiple instances of IBM SmartCloud Provisioning, or if you decide to clone this image, possibly multiple times, then IVIL will keep track of the relation between all the created images and instances. At any time, if a security flaw is found in A, then you can infer that the associated images and instances are likely affected as well. IVIL provides this functionality not only for a single virtual environment, but across heterogeneous virtual environments.
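That inference over the provenance tree is easy to sketch: given a flawed image, walk its clone/checkout descendants. The tree contents below are hypothetical, and this is only an illustration of the idea, not IVIL's implementation:

```python
# Sketch of provenance-based impact analysis: if image A has a security
# flaw, every image derived from it (clones, checkouts, copies) is likely
# affected too. The tree here is a made-up example.

def affected(image, provenance_children):
    """Return the image plus all of its provenance descendants."""
    result = [image]
    for child in provenance_children.get(image, []):
        result.extend(affected(child, provenance_children))
    return result

# A was cloned to A1 and A2; A1 was later cloned to A1a
tree = {"A": ["A1", "A2"], "A1": ["A1a"]}
flawed = affected("A", tree)
```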
What is versioning? Versioning is the logical, user-defined lineage of an image or virtual appliance; it is the way a user would think of versioning his or her image functionality, for example: “this is version 2 of my AccountsPayableService virtual image.” When an image is available with a particular application version, the OS and libraries behind it are often not important; only the application is. Is it important to know its template? Not necessarily; only the information about the OS is relevant. However, it is good to know the application version, and whether there is a newer version available for this image or a new image has been released with the latest security patches. This is the versioning system in IVIL: it helps you understand whether there are other versions of the application in the infrastructure, and whether a given application contains a patch or not.
To summarize, provenance is oriented toward infrastructure administration, whereas versioning is oriented more toward applications and workloads.
For example, let’s assume that we want to provide version 1.0 of software S as an image. By default, users can decide to use software S and trigger any instance of image A. At a certain point, version 1.0 is deprecated and we must upgrade software S to version 1.1. Unfortunately, the OS distribution must also be upgraded. A solution is to reinstall the OS from scratch and install S version 1.1 on it; this new image will be called B. These images do not have any common lineage from a provenance perspective; however, the content has a logical lineage to the user. Image A is the parent of image B from a versioning perspective.
It is important to understand that
an image can have only one provenance parent but can have multiple version
parents. The latter makes sense because an image may have multiple
applications installed, and each one may be associated with its own logical version lineage.
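The contrast between the two lineages can be sketched in a few lines of Python (hypothetical names, not the IVIL API): where provenance is a tree, version lineage is a DAG in which one image can continue the history of several applications at once:

```python
# Toy version lineage; hypothetical names, not the actual IVIL API.
# Unlike provenance (at most one parent), an image can have several
# version parents: one per application whose history it continues.
class VersionedImage:
    def __init__(self, name, version_parents=()):
        self.name = name
        self.version_parents = list(version_parents)

    def ancestry(self):
        """All earlier images this one logically descends from."""
        seen = []
        for parent in self.version_parents:
            if parent.name not in seen:
                seen.append(parent.name)
            for ancestor in parent.ancestry():
                if ancestor not in seen:
                    seen.append(ancestor)
        return seen

# Image C bundles a new version of app S and a new version of app T,
# so it has two version parents (images A and B).
a = VersionedImage("S-v1-image")
b = VersionedImage("T-v2-image")
c = VersionedImage("S-v2-T-v3-image", version_parents=[a, b])

print(c.ancestry())  # ['S-v1-image', 'T-v2-image']
```

Querying the ancestry answers the versioning questions from above: which earlier images carry older versions of the same applications, and which ones still lack a given patch.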
This concludes the introduction of the
Virtual Image Library component in IBM SmartCloud Provisioning. Next time, I
will introduce the concept of similarity between images and the power that it
provides in terms of debugging, infrastructure consolidation, licensing cost, and more.
I've been impressed by the speed of
provisioning with IBM SmartCloud Provisioning: a whole set of virtual machines
comes up in just a few tens of seconds, and in most cases you can get a
running virtual machine in less than one minute.
The SmartCloud Provisioning technology
has been devised and particularly optimized for managing cloud
infrastructures with the following characteristics:
- Infrastructure composed of
- High level of standardization, with a relatively small set of master images used to provision many instances from the same image
- Typical life cycle of the provisioned resources, with a short average lifetime of the provisioned instances
Many other workloads can be deployed
and easily automated on top of SmartCloud Provisioning. For example,
traditional stateful applications can be easily deployed for simple
HA solutions. However, you get the maximum performance from SmartCloud
Provisioning when operating in the context of the scenarios above.
To achieve such high performance, SmartCloud
Provisioning has been designed around an
optimized virtualization infrastructure based on OS streaming: there is no
need to copy large image files over the network when provisioning.
Image copying is the single biggest
bottleneck in VM provisioning today, in terms of CPU, memory, I/O,
and bandwidth usage. In traditional cloud provisioning approaches, all
of this is pure overhead: nobody builds a cloud in order to provision systems.
Provisioning is merely a prerequisite for having systems on which business
workload is deployed, and any overhead is in conflict with the business goals.
The key element of such an infrastructure
is the so-called ephemeral instance: a virtual machine
with no persistent state. Once an instance is terminated, all the data
associated with it is deleted as well. Instances are clones of a master
image, and each clone has a primary virtual disk which is
ephemeral: when the instance goes, so does its ephemeral storage
(mechanisms exist in SmartCloud Provisioning to provide persistence,
if needed by some scenarios).
When creating a new instance, since
master images are read-only resources replicated across the
storage cluster, SmartCloud Provisioning uses Copy-on-Write
(CoW) technology and the iSCSI protocol to stream them, avoiding
expensive copying. Each iSCSI session results in a valid block device
being created in the host OS. Of course, each guest OS (corresponding
to a given instance) requires a writable block device representing
the main disk of the system. All supported hypervisors have a storage
virtualization layer which includes Copy-on-Write technology. For
example, KVM's qcow2 files can be configured to implement CoW
by referencing a backing storage device; VMware has something called
redo files, which effectively do the same thing. In each case,
the hypervisor can natively use the CoW file referencing the iSCSI
block device to expose a virtual block device to the virtual machine. Depending on the hypervisor and guest
OS, this device will show up as something like /dev/sda or C:\. The CoW files are stored locally on the
hypervisor's file system. When the instance is terminated, the
SmartCloud Provisioning agent simply discards the CoW file and
checks whether any other instances are using the same iSCSI device. If the
device is no longer in use, the agent also tears down the iSCSI session.
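The copy-on-write idea behind qcow2 and redo files can be illustrated with a toy model in Python (an illustration only, not hypervisor code): reads fall through to the shared, read-only master image unless the instance has overwritten a block locally, and discarding an instance is just dropping its overlay.

```python
# Toy copy-on-write overlay, illustrating the qcow2/redo-file idea.
# Not real hypervisor code: the base plays the role of the read-only
# master image streamed over iSCSI; the overlay holds only the blocks
# this particular instance has written.
class CowDisk:
    def __init__(self, base_blocks):
        self.base = base_blocks   # shared, read-only master image
        self.overlay = {}         # per-instance local writes only

    def read(self, block_no):
        # Serve locally written blocks first, else fall through to base.
        return self.overlay.get(block_no, self.base[block_no])

    def write(self, block_no, data):
        # Writes never touch the master image.
        self.overlay[block_no] = data

master = ["boot", "kernel", "rootfs"]   # the shared master image
vm1 = CowDisk(master)
vm2 = CowDisk(master)                   # a second clone: no bulk copy needed

vm1.write(2, "rootfs+app")              # only vm1 sees its own change
print(vm1.read(2))   # rootfs+app
print(vm2.read(2))   # rootfs
print(master[2])     # rootfs  (master image untouched)
```

Terminating an instance amounts to discarding its overlay; because the master image is never modified, cloning a new instance requires no image copy at all, which is what makes provisioning in tens of seconds possible.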
Thanks to this infrastructure, provisioning
a new virtual machine becomes a very fast
and reliable process that can create individual systems in tens
of seconds and handle peak requests of thousands of systems per hour.
If you're interested in trying the
SmartCloud Provisioning product, you can download a trial version
from the following link:
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple: you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations that are still trying to realize the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
- Rapidly scalable deployment designed to meet business growth
- Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
- Reduced complexity through ease of use and improved time to value
- Reduced IT labor resources with self-service requesting and highly automated operations
- Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we’ve seen customers get a cloud up and running in just hours—realizing immediate time to value. It’s fast—administrators have been able to go from bare metal to ready-for-work in under five minutes, or start a single VM and load OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
With the new release of IBM SmartCloud Provisioning this week, you can try and see firsthand the potential of this breakthrough technology to accelerate your journey to cloud. And if you want a preview of what’s in development, you can join our Open Beta program for access to beta-level code.
A very interesting Cloud Computing case study of the Capgemini Infrastructure as a Service delivery platform project has been recently published on the Web:
The case study shows how one of the world’s leading infrastructure outsourcing providers has seen the business opportunity of offering to its clients a cloud-based solution that combines the benefits of a high-value infrastructure service provider with the cost advantages of Cloud computing. Capgemini focused the new cloud based services on delivering to their clients Infrastructure as a Service capabilities with much higher flexibility and substantial cost-efficiency.
In partnership with IBM, Capgemini built a fully integrated cloud delivery platform for clients in the UK and USA, leveraging the Tivoli Service Delivery Manager solution, which includes the IBM Tivoli Service Automation Manager, Tivoli Monitoring, and Tivoli Usage Accounting Manager products, on top of IBM BladeCenter HS22V hardware and XIV Storage System technologies.
The key aspects of the solution built by Capgemini have been:
- Implementation of a resilient and scalable global infrastructure with capability of managing resource pools in different regions and with a modular design for quick scale out
- Single solution that can manage a wide range of platforms and architectures, without being tied to any specific hardware technology or vendor, with the ability to choose the right hypervisor and guest OS platforms for the right workload
- Multi-Customer shared infrastructure providing secure network separation between customer environments
- Automation of network management and configuration that supports multiple network domains per customer and linkage to the customers' private networks
- Extensible service catalog to fit the needs of the Capgemini customers
- Ability to quickly on-board existing Capgemini customer workloads.