Would you like to show back and charge
for usage of your IBM Power Systems server?
You may already be aware of the concept
of a virtualized system and virtual machines. Your organization might use virtualization as a means to share physical resources or as the basis for its cloud infrastructure. The usual goal of virtualization is to
centralize administrative tasks while improving scalability and workload utilization. The question is: how do you analyze the usage of such resources
and charge appropriately where required?
The Tivoli Usage and Accounting Manager
(TUAM) team is pleased to announce that the TUAM IBM Hardware
Management Console (HMC) collector also supports collecting usage
information from IBM Systems Director Management Console (SDMC) and facilitates analyzing, reporting, and billing based on the
usage and costs of this metering data. This provides a
means for enterprises to migrate from HMC to SDMC and ensure
continuity of showback/chargeback solutions based on TUAM. Future versions of the HMC/SDMC
collector will exploit SDMC-specific features.
Capabilities of the collector include:
- Ability to capture allocation (entitlements) and usage information for each LPAR, Processor Pool, Memory Pool and the overall System
- Ability to capture capped and uncapped usage and charge different amounts for each
What is IBM Systems Director
Management Console (SDMC)?
The SDMC provides hardware, service,
and virtualization management for your Power Systems server.
The SDMC is the successor to the HMC and the Integrated
Virtualization Manager (IVM), and shows how IBM Systems Director is
going to take an increasingly important role for administrators. For
more information on SDMC, see this blog.
For more information about the IBM
PowerVM HMC data collector, see the TUAM
7.3 Information Center. The collector is available as part of
the TUAM 7.3.0 Enterprise Edition Base Collector Pack.
Unlock the Value of Virtualization with Integrated Service Management Whitepaper
Today IBM announced new SmartCloud Foundation capabilities to help organizations realize the potential of cloud computing. Watch the replay of the IBM SmartCloud launch webcast to learn more about how the new announcements, including IBM SmartCloud Provisioning (delivered by IBM Service Agility Accelerator for Cloud), can help customers move beyond virtualization to more advanced cloud deployments.
My three favorite things about OpenStack are:
- The People
- The Innovation
- The Interoperability
San Diego was my second OpenStack summit. Many of the same faces were in the design
summit sessions I attended, but there were many new faces as well. One of the most exciting observations from
the Folsom design summit was the incredible talent pool assembled. The Grizzly summit was no different – it’s
great to interact with so many incredibly smart, deep and experienced
people. I’m convinced that a single
company could never amass such a collection of quality talent for one
project. I guess it’s no wonder they’re
saying OpenStack is the fastest growing open source project ever.
I must apologize in advance, because I am sure to miss
someone, but I want to tell you about some of the people I interacted with in
the nova, glance, and cinder design sessions.
Over the past few months I’ve really been impressed with the PTLs
(project technical leads). They’re very smart, highly
motivated, and excellent facilitators.
The design sessions invariably get into open debate, but productive
debate. I was impressed with the PTLs’
natural abilities to channel the discussion to bring out the key issues and
land on some concrete next steps.
I got to meet Microsoft’s Peter Pouliot, whose heroic and
tenacious efforts successfully delivered Hyper-V support after a rather dodgy mess earlier in the year. Peter is not your stereotypical Microsoft
developer. He’s an open source guy
through and through. It’s clear that his
personal spirit had a lot to do with corralling the community to deliver
quality code in a very short time frame.
It was great to meet Peter and some of his non-Microsoft
collaborators. Great job, guys!
I also had the pleasure to meet with some of VMware’s
developers and not just those acquired via the billion dollar Nicira
acquisition. The Nicira guys are great –
no question – but I was also very pleased to meet the VMware developer who
completely rewrote the less-than-adequate VMware compute driver. I hope to work closely with them to ensure
the hypervisor is well supported and as interoperable as possible with other
proprietary and open source technologies.
Of course, I can’t speak of OpenStackers without mentioning
RackSpace. Over the past two summits, I
got to interact with a number of RackSpace developers, aka Rackers. I’ve got to hand it to them, they really do
have a great bunch of people and definitely
bring a massive scale service provider
perspective to the discussion. Of
course, being an IBMer myself, I can’t help but bring the enterprise customer
perspective into the mix. I think
OpenStack benefits greatly from these two perspectives brought together in the open.
OpenStack has done a great job defining an extensible
framework for IaaS. This flexibility not
only helps accommodate the varied needs from enterprise to service provider,
but it also enables a massive sea of innovation. Since the Nicira acquisition there’s been a
lot of attention on the innovation around software defined networking and
Quantum, the OpenStack project that provides the abstraction for a variety of
implementations ranging from proprietary, to pure open source like Open vSwitch, to traditional
networking equipment. I think storage is
even hotter than networking these days with a slew of vendors combining
commodity 10Ge switches and commodity Intel servers with a combination of SSDs
and spinning disks to provide new approaches to storage for virtualized
environments. Of course software plays a
critical role in many of these virtualized storage solutions. DreamHost’s open source distributed file system Ceph has been getting a lot of interest.
Enterprise storage vendors like NetApp, IBM, and HP have also
contributed cinder drivers to support their products within OpenStack
clouds. There were also a number of
summit discussions about exposing the different backend implementations of the
abstractions with different qualities of service. Some people, including one of my developers,
have begun to use “Volume Types” as a way to let users choose the kinds of
volumes they need. I believe this is
critical for compute clouds to cover the broadest spectrum of workloads. Of course this principle applies to other
resources and not just cinder volumes.
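For readers who haven’t played with them yet, here is a rough sketch of how volume types work in practice; the type names and the size are purely illustrative, not anything defined at the summit.

```python
# A hypothetical sketch of cinder "Volume Types"; names and backends are
# made up for illustration.
import subprocess

def run(cmd):
    # Echo and execute a CLI command.
    print("$", " ".join(cmd))
    subprocess.check_call(cmd)

# An operator defines the available classes of storage once...
run(["cinder", "type-create", "gold"])   # e.g. SSD-backed volumes
run(["cinder", "type-create", "bulk"])   # e.g. spinning-disk volumes

# ...and a user then picks the kind of volume their workload needs.
run(["cinder", "create", "--volume-type", "gold", "10"])  # a 10 GB "gold" volume
```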
I saw a lightning talk about a nova driver for SmartOS, a cool open source project from Joyent combining Solaris Zones, ZFS, and KVM. ARM and Power CPU
support were presented, as well as a couple of bare metal solutions. Intel, KVM, and OpenStack certainly make a
nice combination, but there’s so much more that’s possible with OpenStack.
Finally, perhaps the most important thing about an OpenStack
cloud is interoperability. Starting with
the hypervisor, IBM has a solution that enables interoperability of images,
volumes, and networks across Xen, KVM, VMware and Hyper-V. We had a few sessions where we
discussed how we can bring the same interoperability to OpenStack. To start with, we need to be able to register
read-only cinder volumes as glance images.
Next, to ensure we can scale out, we need to be able to register multiple copies of the same image.
Finally, to take advantage of performance we need to abstract the clone
operation to enable Copy on Write (CoW), Copy on Read (CoR), as well as the
current local cache plus CoW mechanism for backwards compatibility and to
support 1Ge networks. Combining these
will enable images to work across multiple different hypervisors.
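To make the clone abstraction concrete, here is a minimal sketch of the "local cache plus CoW" mechanism mentioned above, using a qcow2 backing file; the paths are illustrative, and this is not the actual nova or SCP code.

```python
# Copy-on-write clone from a locally cached master image (illustrative paths).
import subprocess

base = "/var/lib/images/ubuntu-base.qcow2"          # cached master image
clone = "/var/lib/instances/instance-0001.qcow2"    # per-instance overlay

# The overlay records only the blocks this instance writes; reads of
# untouched blocks fall through to the shared base image, so the clone is
# near-instant and cheap even on a 1Ge network.
subprocess.check_call(["qemu-img", "create", "-f", "qcow2", "-b", base, clone])
```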
We also need interoperability with existing images, which
means VMware and Amazon images, the two most common forms. Today, it’s quite easy to automate simple
image formatting differences, but the challenge is in the assumptions made by
the images. The current direction for
OpenStack is to use config drive v2 to pass instance metadata to the
guest, which is responsible for pulling key system configuration such as hostnames,
credentials, and IP addresses. Typical VMware
images, on the other hand, generally expect either a push model, where the
hypervisor manipulates the filesystem prior to booting the image, or configuration via their
guest agent, VMware Tools.
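For illustration, here is roughly what the pull model looks like from inside the guest; a minimal sketch assuming the standard config drive label and metadata path, with error handling omitted.

```python
# Read instance metadata from a config drive v2 inside the guest.
import json
import os
import subprocess

# The config drive is presented to the guest as a block device labelled
# "config-2"; mount it read-only and read the instance metadata from it.
os.makedirs("/mnt/config", exist_ok=True)
subprocess.check_call(["mount", "-o", "ro",
                       "/dev/disk/by-label/config-2", "/mnt/config"])

with open("/mnt/config/openstack/latest/meta_data.json") as f:
    meta = json.load(f)

# The guest is then responsible for applying configuration such as the hostname.
print(meta.get("hostname"))
```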
To make matters worse, OpenStack currently assumes
different image formats for each supported hypervisor. One of the sad punchlines from Troy Toman’s keynote was that RackSpace’s private cloud distro, named Alamo, does not interoperate
with their public cloud even though they’re both OpenStack. The good news is that, as Troy went on to
say, the time has come to focus on interoperability.
I got into a great conversation with Jesse Andrews, one of
the original OpenStack guys now at Nebula.
He described an approach to image interoperability: enabling cloud
operators to provide custom image workers at image ingestion time. This enables cloud providers to register
custom image processing code that gets called whenever an image is uploaded to
Glance. The simplest case of this is to
convert image formats to enable Alamo KVM images to run on RackSpace’s Xen
based public cloud.
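That simplest case is essentially a format conversion, which qemu-img handles directly. Here is a hedged sketch of what such an image worker might run; the filenames are illustrative.

```python
# Convert a disk image between formats, e.g. a KVM qcow2 image into a raw
# image that a Xen-based cloud can consume (illustrative filenames).
import subprocess

def convert_image(src, dst, src_fmt="qcow2", dst_fmt="raw"):
    subprocess.check_call(["qemu-img", "convert",
                           "-f", src_fmt, "-O", dst_fmt, src, dst])

convert_image("alamo-guest.qcow2", "alamo-guest.img")
```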
Fortunately, IBM’s SmartCloud Provisioning (SCP) includes
some image management technologies which can help with the more challenging
problems mentioned above. Today’s SCP
2.1 will interrogate images in the library and check for cross-hypervisor
compatibility. Users gain visibility
into this information and can optionally automate fixes wherever possible. We also use this technique to detect the
presence of a critical guest agent.
This brings me to one of my favorite little open source
projects: cloud-init, created by Scott Moser at Canonical. If only it weren’t GPL ;-). Many OpenStackers are using cloud-init to
automate the system configuration pull from config drive v2 mentioned above. This little bootstrap can do much, much more,
but this is certainly a great job for this trusty little tool. Unfortunately, it’s Linux-only. It’s even been made to work with Fedora and
will likely be included in RHEL. Since
we cannot use GPL code in IBM products, we have a similar bootstrap for both
Windows and Linux guests. We’re working
with our lawyers to get approval to contribute this code to cloud-init. Of course,
if Canonical wants to use a more commercially friendly license, as
OpenStack has done, then I could spend less time with lawyers and more time
hacking code ;-).
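To give a taste of what cloud-init actually consumes, here is a tiny, hypothetical #cloud-config user-data snippet of the kind it applies at first boot; the hostname and key are made up.

```python
# Hypothetical #cloud-config user-data; cloud-init reads this (for example,
# from the config drive) on first boot and applies it to the guest.
user_data = """\
#cloud-config
hostname: web01
ssh_authorized_keys:
  - ssh-rsa AAAA... ops@example.com
"""

with open("user-data", "w") as f:
    f.write(user_data)
```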
The beauty of this little bootstrap is its simplicity. This simplicity enables us to automatically
inject the bootstrap into Windows and Linux images. This will let us automatically fix up any old VMware
or Hyper-V image so that it works on OpenStack.
This is a critical first step towards interoperability.
OpenStack is truly becoming an industry-changing
and historic project. With so many
incredibly talented people from countless companies across the globe it’s no
wonder there is so much innovation in the community. I’m really happy to be a part of this growing
community. Together I believe we can
change the industry for the better. If
you would like to be part of this growing and innovative project, check out the
“community” link at www.openstack.org.
Also, we would like to invite you to check back here for future blogs on
OpenStack and IBM’s involvement. OpenStack
is a big part of IBM’s open cloud strategy and we want to be sure to keep you
up to date on our progress.
As part of the transparent development initiative, IBM SmartCloud Provisioning (formerly known as IBM Service Agility Accelerator for Cloud) is launching a series of daily demos, starting on November 7th. Each session will take about one hour.
In this way you can have a look, in almost real time, at what is happening in IBM SmartCloud Provisioning development and learn about new and enhanced capabilities.
If you are interested in joining the sessions, here is the schedule in Central European Time (CET):
- Monday at 4:00 PM
- Tuesday at 11:00 AM
- Wednesday at 4:00 PM
- Thursday at 5:00 PM
- Friday at 11:00 AM
The sessions will be focused on image management.
If you would like to join, using your web browser, connect to
No password is required.
Ok, I admit, I was among the early adopters of the late nineties to
get hooked on VMware. In fact, as an open source advocate I remember
playing with "freemware", QEMU, Bochs, OpenVZ, and several
other x86 virtualization technologies. Likewise, I was among the first
to start using Amazon's Elastic Compute Cloud (EC2). I've been hooked
on x86 commodity hardware virtualization for a long time, and I thank
VMware and Ed Bugnion in particular for that. But why choose VMware now?
Ten years ago, when x86 CPUs made it hard to virtualize efficiently, VMware was great. After
2003, if you were mostly interested in Linux (king of the cloud), Xen was
an excellent open source alternative for virtualizing x86 commodity
servers. In 2006 Amazon launched their EC2 service, which would become
the de facto cloud standard. EC2 is built on Xen and is probably the
single biggest x86 virtualization environment in the world. Several
hundred thousand of my closest friends have found EC2 to be a fantastic
compute platform that goes beyond server virtualization, all without a
trace of VMware. So why choose VMware now?
Today, modern CPUs include specific support for virtualization, making it
easier to deliver efficient virtualization without Xen's paravirt trick
or VMware's innovative code patching. Current Linux kernels include
support for KVM, and I believe upstream kernels will again support Xen
natively. I remember when Red Hat bought Qumranet,
developer of KVM, SPICE, and SolidICE (a desktop virtualization
technology), in 2008. Back then KVM didn't compare to VMware. It
certainly was not "good enough" back then. Three years later, KVM has matured extremely well. I think it really is "good enough"
for commodity OS virtualization. In my cloud development efforts I've
run hundreds of thousands of VMs on Xen and KVM during the past 2 1/2
years. While I really respect Xen, I've come to like and appreciate KVM
on modern CPUs since it's just so simple and easy to use. Today there
are so many "good enough" choices for x86 virtualization, from
Xen, KVM, and VirtualBox to Hyper-V, which Microsoft is practically
giving away just to keep Windows relevant in the datacenter. So why
choose VMware now?
Is low-end disruption
a threat to VMware? Linux and Apache are certainly well established
in the datacenter, preventing Microsoft's dominance over the desktop from
spilling into the datacenter. Ten years ago, when Windows had 90-something
percent market share of desktop computers, I myself considered Microsoft
an untouchable giant. Today, however, I think they're doomed because
Apple is cooler; all the kids have 'em, along with iPhones and
iPads. By analogy, VMware should be very concerned. IMHO, they can and
will lose their dominance, and I think they'll do so by way of the classic Innovator's Dilemma.
VMware continues to cater to their traditional high-end customers.
Meanwhile, nearly three quarters of a million developers are using
Amazon's cloud as their platform for new software applications and
services. And the best part is Amazon's cloud doesn't even need or use
VMware. In fact, neither does Google's App Engine or Microsoft's Azure.
Sense a pattern? If you believe, as I do, that we're on the cusp of a
new platform war to deliver the next generation of applications and
services, then the key to success is the application development
community. VMware may have operations teams sold, but developers love
the cloud. Interestingly, they may not even have the ops guys sold
after all. Here's a forum thread titled "VMWare, a falling giant?":
"According to Ars Technica, 'A new survey seems to show that VMware's iron grip on the enterprise virtualization market is loosening,
with 38 percent of businesses planning to switch vendors within the
next year due to licensing models and the robustness of competing
hypervisors.' What do IT-savvy Slashdotters have to say about moving
away from one of the more stable and feature rich VM architectures
survey found that VMware is the primary hypervisor for server
virtualization in 67.6 percent of shops, followed by Microsoft's Hyper-V
with 16.4 percent and Citrix with 14.4 percent. Wow, this doesn't even
compare to Microsoft's former dominance for which I recall seeing
numbers as high as 98% market share!
So why choose VMware now? Maybe the question should be, "Have you tried an open source hypervisor lately?"
Or better yet, "Have you tried a public cloud yet?"
Frankly, I don't even like using hypervisors directly anymore, as I find
clouds much more powerful and easier to use. Why don't you give ISAAC a try?
You can see what a real cloud is like while also trying out open source hypervisors.
Interim fix 1 for IBM Service Agility Accelerator for Cloud 1.1 (1.1.0-TIV-ISAAC-IF0001) has been published.
It addresses three defects related to the deployment of Windows images.
In more detail, it fixes the following issues:
- Launching a Windows persistent instance failed: after launch, the instance remained in stopped status and was not able to run.
- Persistent instances were not able to run on a VMware hypervisor node.
- Lack of support for Windows 7 and 2008 sysprepped images: the administrator's password could not be changed on an image prepared with Sysprep.
For further details, read the readme file associated with the interim fix.
The IBM Service Agility Accelerator for Cloud 1.1.0 customer interaction program has just concluded. The program ran throughout the development phase of the product.
Participants interacted with the development team via demo sessions and design reviews, and provided valuable feedback on the product's capability. Some participants engaged
further in the beta program to try out the product in-house and share their feedback and experience with the development team during this hands-on experience.
IBM will continue this flexible customer interaction program for the upcoming IBM Service Agility Accelerator for Cloud release.
The program is open to customers and business partners.
Join us if you would like to hear the latest news on the product, familiarize yourself with the newest capabilities, and help us improve the product quality and usability.
To join the program please contact firstname.lastname@example.org
IBM Service Agility Accelerator for Cloud V1.1.0 delivers an entry-level cloud solution to fulfill demand for rapid deployment of virtual machines (VMs) and associated server images in high-volume or low-volume environments. It enables VM deployment with low-touch characteristics for businesses to provide cloud services within dynamically changing environments.
IBM Service Agility Accelerator for Cloud V1.1.0 capabilities for high-scale, low-touch VM deployment allow enterprises and cloud service providers to give their users fast and flexible access to VMs and storage on demand over the network. The solution is highly resilient both to failures of the service and to failures of the components underpinning the cloud infrastructure. Conflicts are reduced because load and demand balancing are carried out based on the real system state.
Differentiating characteristics of IBM Service Agility Accelerator for Cloud:
- Rapid image deployment
- Rapid automated recovery from provisioning errors since failures are retried automatically with little or no manual intervention (compared to hours or even days for traditional manual recovery)
- Scalable to handle high volumes of simultaneous requests, over 4,000 VMs per hour (compared to 20 to 200 VMs per hour using traditional deployment)
- Faster time to value by leveraging virtual images to install the solution, which translates into an installed solution in hours compared to weeks or months using traditional software installation methods
- Designed to provide enhanced return on investment (ROI) for virtualized environments
- Can provide low cost, lightweight introduction to cloud technology with highly automated, fault tolerant, rapid deployment of VMs
- Can help improve performance against service level agreements (SLAs) through highly automated, fault tolerant parallel deployment of VMs
- Can help reduce operational expense through self-service UI, fault tolerance, and automatic error recovery
- Is designed to deliver rapid time to value and unmatched automation and performance.
More details are available in the Announcement Letter.
This week IBM is delivering enhanced capabilities to optimize IT service delivery through a new release of IBM Tivoli Service Automation Manager (TSAM), providing even greater value to customers. TSAM enables a more modern and dynamic data center, allowing users to request, deploy, monitor, and manage cloud computing services. Key differentiators include solution transparency, ease of use, and cost savings. The new release allows IT service providers to onboard multiple customers, deploy IT services very quickly across multiple platforms and hypervisors, maximize resource utilization, and drive cloud operations effectiveness and efficiency by adding storage support and expanding on network integration. http://www-01.ibm.com/software/tivoli/products/service-auto-mgr/
The new capacity planning tool, now available in beta, will unlock the value of IBM Tivoli Monitoring (ITM) and the Tivoli Data Warehouse (TDW) and enable a rich set of analytics on the existing data. This new capability will enable you to:
- Understand and plan your virtual environments proactively, initially for VMware
- Minimize risks in your plan
- Optimize how you use capacity in the environment with intelligent workload sizing and placement
- Apply business and technical policies to keep your environment efficient and risk-free
- Make changes in a what-if analysis framework and view the impact of change.
The tool leverages Tivoli Integrated Portal (TIP) and Tivoli Common Reporting (TCR) with the embedded Cognos reporting engine. It integrates with the ITM and TDW infrastructure to get configuration and usage data from your virtual infrastructure.
Here's a quick overview of the advanced planning scenarios you can now implement in your virtual environment using this tool.
Key Scenarios for a Capacity Analyst
- Planning for capacity growth: Let's suppose your business provides a forecast that will increase the load on the IT infrastructure in the coming months. The capacity analyst can model the increase in resource requirements from the existing VMs in the what-if planning tool, scope the part of the infrastructure to analyze, and automatically generate a plan to fit the increasing demand. If required, new servers can be added to handle the growth.
- Ensure compliance with defined capacity planning policies: The LOB and application owners often provide a list of their requirements to the capacity analyst in terms of how their workloads should be placed on the IT infrastructure. These are typically business guidelines to improve efficiency, reduce cost, respect organizational boundaries, or cut risks on a virtual infrastructure. For example, the Finance and Payroll apps may not be allowed to share common hosts, or hosts may not be shared among apps with different downtime requirements. There may also be technical policies that guide planning: for example, reducing license cost by putting OS images on fewer hosts, or a DBA wanting to keep some headroom for the database VMs. The tool can help to centralize the creation of such policies and select a subset to guide a what-if planning scenario.
- Avoid bottlenecks in your environment: IT administrators can predict a bottleneck in a VMware cluster that may not be fixed by dynamic allocation within the cluster. These are often long-term issues, as the cluster may be running VMs that are not the right combination to share resources dynamically. The planning capability may be used to recommend how VMs can be moved “across” clusters, or how clusters can be restructured to remove bottlenecks and optimize resources in a broader scope.
- Plan for new users in a Cloud environment: Cloud administrators are often challenged with planning for new users on the shared infrastructure and doing what-if analysis. With this tool, they can simulate new VMs on the discovered Cloud, add information regarding users, create policies specific to such users, and create a recommended new environment plan. The policies may simulate users that want dedicated hosts for their VMs, or images that need specific types of hardware, etc. The recommended plan can help them understand how and where to add new hardware, or how to consolidate VMs to free up fragmented Cloud resources.
- Plan for retiring or re-purposing hardware: The planning capability enables the user to add new information for the discovered environment. For example, a user can add warranty date information about the discovered hardware, often contained in spreadsheets or other tools, and then select hosts that are more than 5 years old in the planning tool. They can add new hardware from the catalog for a what-if scenario. The tool can then automatically generate an optimized plan on how the workloads from the old hardware will fit on the new hardware and how many new machines of what type are required.
There may be several other scenarios that one can come up with on top of this tool framework.
The planning tool also provides a workflow-driven UI with both fast-path and expert-mode options. The main workflow page is shown below, with a 5-step approach to create optimized virtual environment plans and default options for several steps. One can iterate through these steps to reach the desired results.
1. Load the latest configuration data of the virtual environment for analysis
2. Set the time period to analyze historical data
3. Define the scope of hosts to analyze in the virtual environment
4. Size the virtual machines in scope
5. Generate a placement plan for the virtual machines on the physical infrastructure in scope
An example recommendation output of the tool is shown below, with interactive topology navigation capability, summary views, and risk scores assigned to the infrastructure elements. This is an actionable recommendation, as one can take this structured output as XML and write an adapter to trigger automation workflows that implement the recommendations. The example screen shows how we analyzed a cluster with 4 hosts and recommended a consolidation onto 3 hosts.
The topology view is interactive as it allows the user to click on various nodes and visualize the summary of the infrastructure levels below the node. Risk levels of the nodes are shown as node colors.
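Since the recommendation output is structured XML, writing such an adapter is straightforward. Below is a purely hypothetical sketch; the element and attribute names are invented, since the actual schema isn't shown here.

```python
# Hypothetical adapter: parse the tool's XML recommendation output and hand
# each recommended placement to an automation workflow. The element and
# attribute names below are invented for illustration.
import xml.etree.ElementTree as ET

def apply_plan(path):
    root = ET.parse(path).getroot()
    for move in root.iter("placement"):          # invented element name
        vm = move.get("vm")                      # invented attribute names
        host = move.get("target-host")
        print(f"would migrate {vm} -> {host}")   # real code would call a workflow

apply_plan("recommendation.xml")
```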
We hope this will be an exciting set of functions to start with, and we look forward to suggestions on feature improvements and scenarios. Please contact Gary Forghetti (email@example.com) to schedule a demo or sign up for the beta version of the tool. We'll keep updating this forum with more details, such as demo videos, white papers, etc.
Anindya Neogi, PhD | Senior Technical Staff Member | Tivoli Software | IBM Master Inventor
I was out for a few drinks last week with friends from the
local tech community, where we try to solve the world’s problems. We were enjoying sitting in the sunshine and
warm weather on the patio of a local brewery discussing the interesting topics
of the week. Before we got into the
presidential election the talk turned to virtualization, of course, and the
current directions their companies are taking.
Everyone has heard about the pricing actions from VMware causing major
concerns among their customers, but these topics always seem somewhat
abstract to me until you hear about people taking action. Well, it turns out a guy who works at a
large tech company in Austin
was actually in the process of installing KVM to replace VMware in his area of
the company due to pricing issues. He
went on to say that a CIO friend of his at another company was in the process
of doing the same thing. That’s the
thing about IT people, once you upset them they can take decisive action to fix
the problem, and they tend to have very long memories. I also came across this CNET
article, subtitled “Market research firm IDC says that data from a new survey
shows that 'open cloud is key for 72 percent of customers.'” Clearly there seems to be a pricing level
that makes open cloud solutions look very attractive. As more companies try to balance the need for
virtualization and cloud with the costs of the solutions, I think the open
cloud will become more attractive.
IBM is working with OpenStack, a global collaboration
of developers and cloud computing technologists producing the ubiquitous open
source cloud computing platform for public and private clouds. The OpenStack Summit, which is sold out, is
going on this week (October 15th) in San Diego; you can read about it and other
topics at the OpenStack blog. There are also some great insights on the
topic at the Linux Foundation blog from IBM’s
Angel Diaz: 3 Projects creating user-driven standards for the open cloud. I think this is definitely a space worth
watching…stay tuned for more updates from the patio…
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
The need to orchestrate really becomes clear when various aspects of cloud management are brought together. The value to the organization at this stage of cloud is simplifying the management of automation -- otherwise a balancing act to manage multiple hypervisors, resource usage, availability, scalability, performance and more -- based on business needs from the cloud, with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it’s difficult to realize the full benefits of cloud computing. The stitching together of best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
In addition to rapid service delivery, the benefit of orchestration is that there can be significant cost savings associated with labor and resources by eliminating manual intervention and management of varied IT resources or services.
Some key traits of cloud orchestration include:
- Integration of cloud capabilities across heterogeneous environments and infrastructures to simplify, automate and optimize service deployment
- Self-service portal for selection of cloud services, including storage and networking, from a predefined menu of offerings
- Reduced need for intervention to allow a lower ratio of administrators to physical and virtual servers
- Automated high-scale provisioning and de-provisioning of resources with policy-based tools to manage virtual machine sprawl by reclaiming resources automatically
- Ability to integrate workflows and approval chains across technology silos to improve collaboration and reduce delays
- Real-time monitoring of physical and virtual cloud resources, as well as usage and accounting chargeback capabilities to track and optimize system usage
- Prepackaged automation templates and workflows for the most common resource types to ease adoption of best practices and minimize transition time
In short, many of the capabilities that we associate with cloud computing are really elements of orchestration. In an orchestrated environment, organizations gain tools to manage their cloud workloads through a single interface, providing greater efficiency, control and scalability. As cloud environments become more complex and organizations seek greater benefit from their computing resources, the need for sophisticated management solutions that can orchestrate across the entire environment will become ever clearer.
Learn more about how cloud orchestration capabilities can help your business. And join the Cloud Provisioning and Orchestration development community to test out the latest cloud solutions and provide feedback to impact development.
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSP) or cloud service providers (CSP), and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce human resources and error from manual tasks—all with an eye to drive revenue and acquire new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify complex and time-consuming processes for creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the spectrum of virtualization to orchestration functionality helps to manage their environments, high-scale provisioning in particular offers a cost-effective way to leverage capacity as a business commodity—a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative costs of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand but it was very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interruption to customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
Learn more about provisioning and orchestration capabilities
that are helping service providers to meet their growing business needs.
Service Health for IBM SmartCloud Provisioning is now generally available on the IBM Integrated Service Management Library (ISML).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring, utilizing a custom agent, OS agents, and the ITMfVE agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to identify and react quickly to issues in your environment, such as an unresponsive compute node, high disk usage on storage nodes, or key kernel services not responding, and to minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library (ISML) via this link: Service Health for IBM SmartCloud Provisioning
In a dynamic cloud environment, standard concepts like IP addresses and storage volumes take on a special meaning when it comes to reserving and using them independently of the virtual machines owned by a cloud user.
The concepts of Elastic IP (EIP) and Elastic Block Store (EBS) were initially introduced by Amazon EC2 as a way to decouple the resources assigned to a cloud user from their utilization. In other words, as a cloud user you can reserve an elastic resource and assign it to one of the VMs you own, but you can also re-assign it to a different VM whenever you need (for example, whenever you need to replace your VM with a new one).
SmartCloud Provisioning offers similar capabilities, exposing the concepts of Static Addresses and Persistent Volumes, which can be reserved and assigned to any running VM.
A SmartCloud Provisioning address is a statically defined address which can be dynamically bound to any instance in the cloud. In other words, a static IP address is associated with your account, not with a particular instance, and you control that address until you choose to explicitly release it.
Let’s examine in more detail how it works.
When SmartCloud Provisioning creates a VM, it assigns a dynamic IP address to it on a default management sub-network. From this point on, the system always refers to the VM using the dynamic address assigned at boot time. Nonetheless, SmartCloud Provisioning offers cloud users the possibility of assigning a different IP address, which can be seen as a reserved, static IP.
In order to achieve this result, a centralized pool of addresses is registered by the cloud administrator and stored in a durable data service. A cloud user can then reserve one or more addresses from this pool and associate one of them with a specific VM they own. Note that the cloud user has no control over which address will be reserved for them; they do not even know upfront whether any static IP addresses are left until they send the reservation request.
Once a static IP has been reserved and assigned to a VM, SmartCloud Provisioning internally creates a mapping between the default dynamic address associated with the selected VM and the reserved IP address. This translates into NAT rules in the host OS's iptables that forward all traffic to the private address of that VM.
In this way you can always refer to your VM using the static address, and even if you decide to re-create the VM, you can reassign that same address to the new VM.
The address remains in your reserved list as long as you need it, and you can release it when you no longer need it.
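As a rough sketch of that NAT mapping (not SmartCloud Provisioning's actual code; the addresses are illustrative), the binding boils down to a pair of iptables rules on the host:

```python
# Bind a reserved static address to a VM's dynamic private address via NAT.
import subprocess

def bind_static_ip(static_ip, private_ip):
    # Inbound: rewrite the destination of packets aimed at the static address.
    subprocess.check_call(["iptables", "-t", "nat", "-A", "PREROUTING",
                           "-d", static_ip, "-j", "DNAT",
                           "--to-destination", private_ip])
    # Outbound: make replies appear to come from the static address.
    subprocess.check_call(["iptables", "-t", "nat", "-A", "POSTROUTING",
                           "-s", private_ip, "-j", "SNAT",
                           "--to-source", static_ip])

bind_static_ip("192.0.2.10", "10.0.0.42")  # illustrative addresses only
```

Releasing the address would simply delete the same rules (iptables -D) and return the address to the pool.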
Persistent storage is critical to any non-trivial production application. Just as Amazon's EBS has proven to be extremely valuable, SmartCloud Provisioning persistent volumes are equally powerful, offering off-instance storage that persists independently of the life of an instance. Users can create arbitrary numbers of arbitrarily sized persistent volumes. A volume can be dynamically attached to any VM in the cloud, as long as only one instance is attached at any time.
Once attached, a persistent volume appears to the guest OS like any other raw, unformatted block device.
Each persistent volume is assigned a UUID, which the cloud user can use to track it.
RAID sets can easily be created from multiple volumes, ensuring each volume is hosted on a separate physical host or device.
Multiple block devices are then exposed to the guest OS, which can establish its own RAIDed meta-devices using tools like mdadm, as in the sketch below.
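For example, inside the guest the mdadm step might look like the following; a minimal sketch assuming the two attached volumes show up as /dev/vdb and /dev/vdc (device names vary by hypervisor).

```python
# Mirror two attached persistent volumes into one resilient meta-device.
import os
import subprocess

subprocess.check_call(["mdadm", "--create", "/dev/md0",
                       "--level=1", "--raid-devices=2",
                       "/dev/vdb", "/dev/vdc"])

# The volumes arrive as raw, unformatted block devices, so the new
# meta-device still needs a filesystem before it can be mounted.
subprocess.check_call(["mkfs.ext4", "/dev/md0"])
os.makedirs("/data", exist_ok=True)
subprocess.check_call(["mount", "/dev/md0", "/data"])
```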
Behind the scenes, these block devices are very similar to the primary boot disk of a non-persistent VM. However, they are read-write iSCSI devices, directly attached to the instance without leveraging Copy-on-Write. Note that persistent block storage is also hosted on the same storage cluster used for master images.
As with static IP addresses, persistent volumes are associated with your account, not with a particular instance, and you control them until you choose to explicitly delete them.
Persistent volumes allow you to keep your data separate from the OS, offering you the possibility of moving them from one VM to another whenever you need. Moreover, they offer a valid mechanism for keeping your data safe when dealing with VMs that do not have dedicated persistent storage (the non-persistent VMs managed by SmartCloud Provisioning).
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link: