I wanted to let everyone know that a Trial Virtual Machine is available for the SmartCloud Monitoring version 7.2 FP1 product. It provides a 90-day trial of the software to monitor your virtualized environment and includes the Capacity Planning tools for VMware and PowerVM. These tools can help you optimize your virtualized environment and save money.
Within a few hours you can have the Virtual Machine up and running and monitoring your virtualized environment.
This is a great tool if you are working with a customer on a Proof of Concept. Or, if you are a customer, it is a really quick and easy way to evaluate the software.
The Trial includes the SmartCloud Monitoring product plus a little bit of extra content. It includes monitoring for:
PowerVM (including OS, VIOS, CEC, and HMC)
Citrix XenApp, XenDesktop, XenServer
Log File Monitoring
Agent Based and Agent-less Operating System monitoring
Integration with Tivoli Storage Productivity Center
Integration with IBM Systems Director
The trial also includes Predictive Analytics, Capacity Planning and Optimization for VMware and PowerVM.
You can find the software at the following URL: https://www.ibm.com/services/forms/preLogin.do?source=swg-ibmscmpcvi2
If you have any questions or need assistance, you can send me an email at firstname.lastname@example.org
Is Cloud the Next Utility? Hmmm, there's an interesting thought. I know many of us are still trying to define "what is Cloud" and "what does Cloud mean for my business." I spend my days in IBM's Cloud & Smarter Infrastructure team working on bringing "Cloud Solutions" to our customers. This turns out to be simply helping customers use what they have, maybe with a little twist, to build out their data center in a more robust, efficient manner. It's all about helping them meet the needs of their business. IT teams have been doing derivations of "do more with less" for years, and as technology matures, specifically with virtualization and the management of virtual environments, IT teams are able to improve their Quality of Service and reduce their Total Cost of Ownership while increasing the services they offer their consumers. How, you ask? IT managers everywhere are trying to figure out how to provide their services in a Utility-like format utilizing "the Cloud." Think about how natural it is for us to expect that flipping on a light switch magically makes electricity flow and the light come on; what if we could select a service and, magically, systems were provisioned, business processes established and the service made available - would that provide business value? I'll use this blog post to start you thinking about "Cloud as the Next Utility" and get you wondering if we do indeed have front row seats to a Cloud Computing Revolution.
Should I get my electricity from the local power company, or should I invest in sustaining my own private electricity? As individuals focus on a greener way of life, they may be asking themselves that question as they focus on using what they have while minimizing their own carbon footprint. People have options that they can ponder to figure out what is best for their families, their lives and their utility usage.
Organizations are iterating over multiple decision points to meet the growing IT infrastructure requirements of running their business, and their choices are continually influenced by many different factors. Do they already have a large IT investment? Do their IT requirements expand and contract depending on current activities? Do they have a diverse investment and skills in existing service management tools? It's apparent that these same questions can be asked regardless of the size of your IT shop - big and small companies alike have come to depend on IT services, and this dependency continues to grow in this computing revolution. One direction an IT manager might take is a hybrid approach. Such a strategy allows them to utilize their current investment in infrastructure, tools and skills to build out their own Private Cloud while keeping a Public Cloud vendor accessible when their computing needs spike. Companies have options that they can ponder to figure out what is best for their company's business, their IT environments and their utility usage.
How is it that when I buy a new appliance, I don't even think about whether it will work with my electrical outlets when I get home? Standardization on how electricity is delivered and consumed is ubiquitous and expected worldwide. As consumers, we buy a utility and have an expectation that it will work with all our appliances to meet our needs.
Most companies have already made large investments in both infrastructure and tools, as well as the skills to maintain them. The need to deliver more with what is available requires technology that can bridge from existing environments to new environments regardless of the influences. By adopting and adhering to computing standards, tooling can utilize data from existing systems while meeting the needs and different delivery methodologies of the business in new ways. This means I can ease the coordination of complex tasks and workflows in my data center while leveraging the existing skills, processes and technology artifacts to deliver services in new, different ways. By employing computing standards in my data center strategy, I can be assured that I can rely on Public Cloud for my processing spikes, and I can utilize my older infrastructure while integrating new infrastructure - all with the same extensible tooling for deployment and management of compute, storage and networking. As you're defining the next milestones in your cloud strategy/roadmap, I would encourage you to investigate what is in use today in your data center and how you need to evolve/reuse that investment. IBM offers SmartCloud Orchestrator to manage your existing datacenter(s). It provides capabilities based on industry standards and can extend your current processes to Cloud, helping you bring Cloud Services to your users in the same self-service manner as flipping on a light switch or plugging in a new appliance. A utility that will work to meet your users' needs.
Until next time, keep your head in the Clouds.
The challenges of
virtualized environments are driving the shift to greater integration of
service management capabilities such as image and patch management, high-scale
provisioning, monitoring, storage and security. Join us for this webcast to learn how
organizations can realize the full benefits of virtualization to reduce
management costs, decrease deployment time, increase visibility into
performance and maximize utilization.
If you're in North America, register here for the April 16th session:
If you're in Asia Pacific, register for the April 23rd session:
Determine the right cloud orchestration strategy to
address the unique needs and pain points of your organization while increasing
productivity and spurring innovation. And learn more about the recently announced orchestration capabilities from IBM that leverage OpenStack to manage heterogeneous hybrid environments. Sign up today!
Even if you
weren’t at IBM Pulse, trending right now on the web is chatter about IBM’s
announcement to leverage open technologies pervasively in the development
of its cloud offerings.
With SmartCloud Orchestrator—an integrated platform to standardize and manage
heterogeneous hybrid environments—IBM is launching its first commercial offering
based on OpenStack. And with SmartCloud Orchestrator, IBM is also redefining
the scope of orchestration
to encompass the streamlining and integration of all resources, workloads and
services. The need for this kind of capability is addressed in the latest IDC
report which discusses why it will become a priority as organizations look
to improve operational efficiency and reduce the mess and complexity of growing
virtualized environments. The ability to standardize and automate cloud services includes integrating performance and
capacity management, usage and accounting, and rich image lifecycle management.
In addition, services and tasks such as compute and storage provisioning,
configuration of network devices, integration with service request and change
management systems and processes can all be streamlined. Out-of-the-box robust
workload patterns also enable fast development of cloud services.
With SmartCloud Orchestrator, it’s all brought together to seamlessly manage
heterogeneous environments, allowing organizations to build on existing
investments and open source technologies.
If you haven’t had time to catch up on what’s trending, here’s the short version on
how IBM is helping to advance
the cloud to drive innovation.
TUC Webcast: Maximize the Benefits of Virtualization for Greater ROI
Please join the Tivoli User Community
for a live Webcast and opportunity for Q&A, Thursday, February 21, 2013 at 11:00 AM ET USA.
Click here to reserve your webcast seat now
About this Webcast:
Challenges with virtualized environments are driving the shift to
increased integration of service management capabilities such as image
and patch management, high-scale provisioning, monitoring, storage and
security. In this webcast, learn how organizations can realize the full
benefits of virtualization including reduced management costs, decreased
deployment time, increased visibility into performance and maximized utilization.
About the Speaker:
Matthew Rodkey, Product Manager focusing on Tivoli Cloud Solutions
Matt Rodkey is a Product Manager focusing on Tivoli Cloud Solutions.
In 13 years with IBM, Matt has worked in a number of areas in the Tivoli
portfolio including Security, Monitoring, and Service Delivery.
The Official Tivoli User Community is the largest online and offline
organization of Tivoli professionals in the world – home to over 160 local User
Communities and dozens of virtual/global groups from 29 countries – with more
than 26,000 members. The TUC community offers Users blogs and forums for
discussion and collaboration, access to the latest whitepapers, webinars,
presentations and research for Users, by Users and the latest information on
Tivoli products. The Tivoli User Community offers the opportunity to
learn and collaborate on the latest topics and issues that matter most.
Membership is complimentary. Join NOW!
Please join the Tivoli User
Community for their next webcast on IBM SmartCloud Consumer Monitoring. This webcast will be held on
Tuesday, January 22, 2013 at 11:00 AM ET, USA.
IBM SmartCloud Consumer Monitoring is a new product
developed for cloud consumers and service providers. An innovative new architecture embeds
monitoring technology in library images, so newly deployed VMs are discovered
and monitored within seconds of being instantiated. “Fabric Nodes” use innovative distributed
database technology to display data for nodes and applications, or logical
groupings of them, and run as virtual machines alongside the application
VMs. New fabric nodes come online as
needed, and shut themselves down when no longer needed, ensuring optimum use of resources.
ABOUT THE SPEAKER:
Ben Stern, Executive IT Specialist, IBM Cloud & Virtualization products
Stern is an Executive IT Specialist.
For the past several years, he has defined Best Practices for Tivoli's
SAPM portfolio. Most recently, he has
taken on the Best Practices role for the Cloud and Virtualization products.
1. On the VM console menu, navigate to VM > Guest > Install/Upgrade VMware Tools.
2. Open the folder containing the VMware Tools installables in a terminal, extract the archive with tar -xzf VMwareTools-4.0.0-261974.tar.gz, and run the installer vmware-tools-distrib/vmware-install.pl.
3. Run /usr/bin/vmware-config-tools.pl
The steps mentioned above usually work fine for a 64-bit OS. However, today I had to create a 32-bit RHEL 6.1 VM and ran into a couple of issues:
1. gcc was not installed. Install the gcc rpm from the OS media.
2. The kernel header files were not found in /usr/include. After a little googling, I found the solution to this issue:
i. Run the command uname -r
ii. Install the matching headers with rpm -ivh kernel-devel-<version found in the command above>
iii. ls -d /usr/src/kernels/$(uname -r)*/include gives us the kernel header files path, which we can then feed to the vmware-config-tools.pl prompt.
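Putting it together, here is the whole workaround as one hedged sketch (it assumes the gcc and kernel-devel packages are available from your RHEL media or a configured repository):
# Install the build prerequisites for the running 32-bit kernel, then re-run
# the VMware Tools configurator and feed it the header path it asks for.
KVER=$(uname -r)                        # e.g. 2.6.32-131.0.15.el6.i686
yum install -y gcc kernel-devel-$KVER   # or rpm -ivh the equivalent packages
ls -d /usr/src/kernels/$KVER*/include   # the path to paste at the headers prompt
/usr/bin/vmware-config-tools.pl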
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes complex to accurately assess or manage. Integrating cost management into overall service management is crucial, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency -- the ability to allocate IT costs, usage, and value.
My three favorite things about OpenStack are:
- The People
- The Innovation
- The Interoperability
San Diego was my second OpenStack summit. Many of the same faces were in the design
summit sessions I attended, but there were many new faces as well. One of the most exciting observations from
the Folsom design summit was the incredible talent pool assembled. The Grizzly summit was no different – it’s
great to interact with so many incredibly smart, deep and experienced
people. I’m convinced that a single
company could never amass such a collection of quality talent for one
project. I guess it’s no wonder they’re
saying OpenStack is the fastest growing open source project ever.
I must apologize in advance, because I am sure to miss
someone, but I want to tell you about some of the people I interacted with in
the nova, glance, and cinder design sessions.
Over the past few months I’ve really been impressed with the project technical leads (PTLs). They’re very smart, highly
motivated, and excellent facilitators.
The design sessions invariably get into open debate, but productive
debate. I was impressed with the PTLs’
natural abilities to channel the discussion to bring out the key issues and
land on some concrete next steps.
I got to meet Microsoft’s Peter Pouliot, whose heroic and
tenacious efforts successfully delivered Hyper-V support after a rather dodgy mess earlier in the year. Peter is not your stereotypical Microsoft
developer. He’s an open source guy
through and through. It’s clear that his
personal spirit had a lot to do with corralling the community to deliver
quality code in a very short time frame.
It was great to meet Peter and some of his non-Microsoft
collaborators. Great job guys!
I also had the pleasure to meet with some of VMware’s
developers and not just those acquired via the billion dollar Nicira
acquisition. The Nicira guys are great –
no question, but I was also very pleased to meet the VMware developer who
completely rewrote the less than adequate VMware compute driver. I hope to work closely with them to ensure
the hypervisor is well supported and as interoperable as possible with other
proprietary and open source technologies.
Of course, I can’t speak of OpenStackers without mentioning
RackSpace. Over the past two summits, I
got to interact with a number of RackSpace developers, aka Rackers. I’ve got to hand it to them; they really do
have a great bunch of people and definitely
bring a massive scale service provider
perspective to the discussion. Of
course, being an IBMer myself, I can’t help but bring the enterprise customer
perspective into the mix. I think
OpenStack benefits greatly from these two perspectives brought together in open collaboration.
OpenStack has done a great job defining an extensible
framework for IaaS. This flexibility not
only helps accommodate the varied needs from enterprise to service provider,
but it also enables a massive sea of innovation. Since the Nicira acquisition there’s been a
lot of attention on the innovation around software defined networking and
quantum, the OpenStack project that provides the abstraction for a variety of
implementations ranging from proprietary,
to pure open source like Open vSwitch, to traditional standard
networking equipment. I think storage is
even hotter than networking these days with a slew of vendors combining
commodity 10GbE switches and commodity Intel servers with a combination of SSDs
and spinning disks to provide new approaches to storage for virtualized
environments. Of course software plays a
critical role in many of these virtualized storage solutions. DreamHost’s open source distributed file system Ceph has been getting a lot of interest.
Enterprise storage vendors like NetApp, IBM, and HP have also
contributed cinder drivers to support their products within OpenStack
clouds. There were also a number of
summit discussions about exposing the different backend implementations of the
abstractions with different qualities of service. Some people, including one of my developers,
have begun to use “Volume Types” as a way to let users choose the kinds of
volumes they need. I believe this is
critical for compute clouds to cover the broadest spectrum of workloads. Of course this principle applies to other
resources and not just cinder volumes.
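As a rough sketch of how this looks with the cinder CLI of that era (the type name and capability key below are invented for illustration):
# An operator defines a volume type and tags it with a backend capability...
cinder type-create fast-ssd
cinder type-key fast-ssd set capabilities:storage_protocol=iSCSI
# ...and a user then asks for that class of storage when creating a 10 GB volume.
cinder create --volume-type fast-ssd --display-name my-data 10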
I saw a lightning talk about a nova driver for SmartOS, a cool open source project from Joyent combining Solaris Zones, ZFS, and KVM. ARM and Power CPU
support was presented as well, along with a couple of bare metal solutions. Intel, KVM, and OpenStack certainly make a
nice combination, but there’s so much more that’s possible with OpenStack.
Finally, perhaps the most important thing about an OpenStack
cloud is interoperability. Starting with
the hypervisor, IBM has a solution that enables interoperability of images,
volumes, and networks across Xen, KVM, VMware and Hyper-V. We had a few sessions where we
discussed how we can bring the same interoperability to OpenStack. To start with, we need to be able to register
read-only cinder volumes as glance images.
Next, to ensure we can scale out, we need to be able to register multiple copies of the same image.
Finally, to take advantage of performance we need to abstract the clone
operation to enable Copy on Write (CoW), Copy on Read (CoR), as well as the
current local cache plus CoW mechanism for backwards compatibility and to
support 1GbE networks. Combining these
will enable images to work across multiple different hypervisors.
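For readers less familiar with CoW cloning, the qcow2 backing-file mechanism on KVM gives a concrete feel for it; this is an illustration of the general technique, not the OpenStack abstraction under discussion (file names are made up):
# A copy-on-write clone: the new image stores only the blocks that diverge
# from the shared, read-only base image, so creation is nearly instant.
qemu-img create -f qcow2 -b base.qcow2 clone01.qcow2
qemu-img info clone01.qcow2   # shows the backing-file relationship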
We also need interoperability with existing images which
means VMware and Amazon as the two most common forms of images. Today, it’s quite easy to automate simple
image formatting differences, but the challenge is in the assumptions made by
the images. The current direction for
OpenStack is to use config drive v2 to pass instance metadata to the
guest, which is responsible for pulling key system configuration such as hostnames,
credentials, and IP addresses. Typical VMware
images on the other hand generally expect either a push model, where the
hypervisor manipulates the filesystem prior to booting the image, or configuration via their
guest agent, VMware Tools.
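To make the pull model concrete, here is a minimal guest-side sketch (mount point and parsing kept deliberately simple; by convention the config drive appears as a disk labeled config-2):
# Locate the config drive by its filesystem label, mount it read-only, and
# read the instance metadata a bootstrap like cloud-init would consume.
DEV=$(blkid -t LABEL=config-2 -o device)
mkdir -p /mnt/config
mount -o ro "$DEV" /mnt/config
cat /mnt/config/openstack/latest/meta_data.json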
To make matters worse, OpenStack currently assumes
different image formats for each supported hypervisor. One of the sad punchlines from Troy Toman’s keynote was that RackSpace’s private cloud distro named Alamo does not interoperate
with their public cloud even though they’re both OpenStack. The good news is that, as Troy went on to
say, the time has come to focus on interoperability.
I got into a great conversation with Jesse Andrews, one of
the original OpenStack guys now at Nebula.
He described the approach to image interoperability by enabling cloud
operators to provide custom image workers at image ingestion time. This enables cloud providers to register
custom image processing code that gets called whenever an image is uploaded to
Glance. The simplest case of this is to
convert image formats to enable Alamo KVM images to run on RackSpace’s Xen
based public cloud.
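The format-only piece really is mechanical; as a hedged illustration with made-up file names, a KVM qcow2 image can be rewritten as a VMDK with standard tooling, which is roughly what such an image worker would do in the simplest case:
# Convert the image format only; the guest's baked-in assumptions remain
# the hard part of interoperability.
qemu-img convert -f qcow2 -O vmdk alamo-image.qcow2 alamo-image.vmdk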
Fortunately, IBM’s SmartCloud Provisioning (SCP) includes
some image management technologies which can help with the more challenging
problems mentioned above. Today’s SCP
2.1 will interrogate images in the library and check for cross hypervisor
compatibility. Users gain visibility
into this information and can optionally automate fixes wherever possible. We also use this technique to detect the
presence of a critical guest agent.
This brings me to one of my favorite little open source
projects, cloud-init, created by Scott Moser at Canonical. If only it weren’t GPL ;-). Many OpenStackers are using cloud-init to
automate the system configuration pull from the config drive v2 mentioned above. This little bootstrap can do much, much more,
but this is certainly a great job for this trusty little tool. Unfortunately, it’s only for Linux. It’s even been made to work with Fedora and
will likely be included in RHEL. Since
we cannot use GPL code in IBM products, we have a similar bootstrap for both
Windows and Linux guests. We’re working
with our lawyers to get approval to contribute this code to cloud-init. Of course,
if Canonical wants to use a more commercial friendly license like
OpenStack has done, then I could spend less time with lawyers and more time
hacking code ;-).
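For anyone who hasn’t played with it, here is the flavor of what cloud-init consumes; a trivial, made-up user-data script run once at first boot (a #cloud-config YAML document works just as well):
#!/bin/sh
# Illustrative first-boot payload; cloud-init fetches this from config drive
# or the metadata service and executes it on the instance's first boot.
echo "127.0.0.1 $(hostname)" >> /etc/hosts
echo "bootstrapped on $(date)" > /var/log/first-boot.log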
The beauty of this little bootstrap is its simplicity. This simplicity enables us to automatically
inject the bootstrap into Windows and Linux images. This will let us automatically fix up any old VMware
or Hyper-V image so that it works on OpenStack.
This is a critical first step towards interoperability.
OpenStack is truly becoming an industry changing
and historic project. With so many
incredibly talented people from countless companies across the globe it’s no
wonder there is so much innovation in the community. I’m really happy to be a part of this growing
community. Together I believe we can
change the industry for the better. If
you would like to be part of this growing and innovative project, check out the
“community” link at www.openstack.org.
Also, we would like to invite you to check back here for future blogs on
OpenStack and IBM’s involvement. OpenStack
is a big part of IBM’s open cloud strategy and we want to be sure to keep you
up to date on our progress.
I was out for a few drinks last week with friends from the
local tech community where we try to solve the world’s problems. We were enjoying sitting in the sunshine and
warm weather on the patio of a local brewery discussing the interesting topics
of the week. Before we got into the
presidential election the talk turned to virtualization, of course, and the
current directions their companies are taking.
Everyone has heard about the pricing actions from VMware causing major
concerns from their customers, but these topics always seem somewhat
abstract until you hear about people taking action. Well, it turns out the guy who works at a
large tech company in Austin
was actually in the process of installing KVM to replace VMware in his area of
the company due to pricing issues. He
went on to say that a CIO friend of his at another company was in the process
of doing the same thing. That’s the
thing about IT people, once you upset them they can take decisive action to fix
the problem and they tend to have very long memories. I also came across this CNET
article subtitled “Market research firm IDC says that data from a new survey
shows that "open cloud is key for 72 percent of customers." Clearly there seems to be a pricing level
that makes open cloud solutions look very attractive. As more companies try to balance the need for
virtualization and cloud with the costs of the solutions, I think the open
cloud will become more attractive.
IBM is working with OpenStack, which is a global collaboration
of developers and cloud computing technologists producing the ubiquitous open
source cloud computing platform for public and private clouds. The OpenStack Summit, which is sold out, is
going on this week (October 15th) in San Diego; you can read about it and other
topics at the OpenStack blog. There are also some great insights on the
topic at the Linux Foundation blog from IBM’s
Angel Diaz: 3 Projects creating user-driven standards for the open cloud . I think this is definitely a space worth
watching…stay tuned for more updates from the patio…
It’s been estimated that the number of virtual machines in data centers has increased at least tenfold in the last decade. More than fifty percent of virtualized environments now have more than one brand of hypervisor. The hypervisor promise of cutting infrastructure expense has given way to increases in licensing costs of more than three hundred percent. And the average number of images destroyed? Nobody knows.
In short, the challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity.
A critical piece to solving these challenges, as many organizations have already discovered, is image management. While there are many ad hoc and isolated solutions, there continues to be a real need for comprehensive image lifecycle management to combat image sprawl, get more visibility and analysis into where images are stored and how they are being used, and to ensure security through timely patching of images. This doesn’t necessarily mean jumping to cloud solutions, especially for businesses that aren’t ready to adopt cloud orchestration
yet, but rather, implementing image capabilities in the virtualized environment that are robust enough to help with high-value applications and the on-ramp to advanced cloud capabilities.
Simplifying image creation and deployment
Because images are easy to create and copy, it’s often difficult to decipher which images are crucial, where there is redundancy and where there may be a need for more governance. It is also an ongoing challenge to understand what an image consists of without launching it. This image complexity has resulted in IT spending a significant portion of their time on mundane or repetitive tasks such as manually building images and maintaining an image library.
Inserting automation best practices into the process of creating, deploying and managing images can result in immediate time and labor savings, with as much as 40-80% labor cost reduction by increasing image/admin ratio efficiency. Automation also helps to optimize the efficiency and accuracy of service delivery in the data center.
Once images are captured they can be deployed as often as needed. Paired with robust, automated, high-scale provisioning, hundreds of new virtual machines can be deployed in minutes, increasing IT efficiency. They can also be customized based on user needs.
Improving image analysis
Key to effective image analysis (including image search, drift, version control and image vulnerability) is the use of a federated image library, which pulls together the storage and meta information of images across multiple image repositories and hypervisors.
Image search:
With a large amount of image information to contain and understand, it can become difficult to determine the connection between images or their origin. A family-tree hierarchy and grouping of images with version chains simplifies image search by showing how images are linked, when they are in use and where they originated, even in a mixed hypervisor environment. Additionally, searching capabilities within images drastically reduce the complexity of finding the right image and associated information about it.
Image drift:
Varying image iterations make it difficult to manage compliance and version control. Frequently, administrators are forced to maintain volumes of duplicate and unnecessary images because it is difficult to ascertain the need, use or ownership of images. Advanced image management can increase visibility into what is inside a virtual machine through a centralized image library, to determine opportunities to consolidate images, or determine if there are security threats from vulnerable images.
Increasing security with image patching
With the explosion of images to govern, there is a need to detect vulnerability exposures in images to ensure that no virtual machines are created without the proper level of security patches. All systems, both physical and virtual, need to be patched whether they are distributed or part of the cloud. A simplified, automated patching process can administer virtual images from a single console so you have the scalability to patch as quickly as you can provision, allowing users to maintain golden and copied images in a patched state. With this patching capability, policy enforcement can be accomplished and proven in minutes instead of days, and IT can increase the accuracy and speed of patching enforcement, achieving as much as a 98% first-pass patch success rate in hours.
The benefits of a comprehensive, integrated image management solution are immediately obvious. Best of all, there is a high degree of reward with very little risk.
And with image sprawl under control, organizations can expand capabilities for richer end-to-end service management across the virtualized infrastructure such as performance management and data protection as well as look to higher value cloud capabilities for faster service delivery.
Join us on
the Tivoli User Community webcast, with an opportunity for questions, on
Tuesday, September 25th at 11:00 AM ET, USA
Reserve Your Webcast Seat Now
While virtualization can produce significant cost savings as a result of
reducing infrastructure overhead, it does not address the single-largest cost
element for most data centers—the labor to manage this environment—which can be
as high as 40 percent of the overall cost. If not controlled, management costs
can negate the cost savings realized through virtualization. Virtualization
enables mobility of systems and flexible deployment and re-deployment of
systems. Manually tracking software stacks and configurations of VMs and images
becomes increasingly difficult and there is a need for provisioning automation. In this
webcast, learn how to accelerate application deployment and increase business
agility by leveraging virtualization and building a simple, scalable cloud.
Speaker: Matt Rodkey, Product Manager, Cloud Mgmt.
Rodkey is a Product Manager focusing on Tivoli Cloud Solutions. In 13 years with IBM, Matt has worked in a
number of areas in the Tivoli portfolio including Security, Monitoring, and
Service Delivery. Click Here to view his TUC Profile
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
The need to orchestrate really becomes clear when various aspects of cloud management are brought together. The value to the organization at this stage of cloud is simplifying the management of automation – otherwise a balancing act to manage multiple hypervisors, resource usage, availability, scalability, performance and more -- based on business needs from the cloud, with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it’s difficult to realize the full benefits of cloud computing. The stitching together of best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
In addition to rapid service delivery, the benefit of orchestration is that there can be significant cost savings associated with labor and resources by eliminating manual intervention and management of varied IT resources or services.
Some key traits of cloud orchestration include:
• Integration of cloud capabilities across heterogeneous environments and infrastructures to simplify, automate and optimize service deployment
• Self-service portal for selection of cloud services, including storage and networking, from a predefined menu of offerings
• Reduced need for intervention to allow lower ratio of administrators to physical and virtual servers
• Automated high-scale provisioning and de-provisioning of resources with policy-based tools to manage virtual machine sprawl by reclaiming resources automatically
• Ability to integrate workflows and approval chains across technology silos to improve collaboration and reduce delays
• Real-time monitoring of physical and virtual cloud resources, as well as usage and accounting chargeback capabilities to track and optimize system usage
• Prepackaged automation templates and workflows for most common resource types to ease adoption of best practices and minimize transition time
In short, many of the capabilities that we associate with cloud computing are really elements of orchestration. In an orchestrated environment, organizations gain tools to manage their cloud workloads through a single interface, providing greater efficiency, control and scalability. As cloud environments become more complex and organizations seek greater benefit from their computing resources, the need for sophisticated management solutions that can orchestrate across the entire environment will become ever clearer.
Learn more about how cloud orchestration capabilities can help your business. And join the Cloud Provisioning and Orchestration development community to test out the latest cloud solutions and provide feedback to impact development.
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSP) or cloud service providers (CSP), and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce human resources and error from manual tasks—all with an eye to drive revenue and acquire new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify complex and time consuming processes for creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the spectrum of virtualization to orchestration functionality helps to manage their environments, high-scale provisioning in particular offers a cost-effective way to leverage capacity as a business commodity—a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative costs of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand but it was very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interruption to customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
Learn more about provisioning and orchestration capabilities
that are helping service providers to meet their growing business needs.
The solution Endpoint security for SmartCloud Provisioning v2.1
has been published on IBM Integrated Service Management Library (ISML).
The purpose of Endpoint security for SmartCloud Provisioning v2.1 is to demonstrate how IBM Tivoli Endpoint Manager can be integrated with the IBM SmartCloud Provisioning Infrastructure.
Endpoint security for SmartCloud Provisioning will generate the components required by IBM SmartCloud Provisioning 2.1 to automatically install IBM Tivoli Endpoint Manager agents when deploying virtual systems. This will allow cloud administrators to easily maintain compliance over their virtualized network. IBM SmartCloud Provisioning v2.1
as well as IBM Tivoli Endpoint Manager v8.2
need to be available. If you are participating in the IBM SmartCloud Provisioning v2.1 beta and have IBM Tivoli Endpoint Manager, consider using Endpoint security as well.
We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.
The IT organization must have the capabilities to both deliver services more quickly to meet the demands of the business and be able to provide high levels of security and compliance. In the past the delivery of the services was typically the bottleneck in providing new services, but now with automated cloud and self service delivery models the teams responsible for change management and security can quickly become the bottleneck due to manual processes and siloed tools.
For example, organizations need the ability to patch all of their systems, both physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications and related security configurations. And there can be too many security exposures with offline and suspended VMs that haven’t been patched in weeks or months.
A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.
Cloud provisioning should combine application and image provisioning for workload optimized clouds and deliver:
· Reduced costs with automated high-scale provisioning; multiple hypervisor options and HW of choice
· Accelerated time-to-market with standardized pattern-based deployment for workload optimized cloud
· Image sprawl prevention with in-built advanced image lifecycle management capabilities
· Ease of adoption and clear roadmap to move to advanced cloud capabilities
Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:
· Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms
· Automatic assessment and “single click” remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly
· Enterprise-class scalability and security to provide proven scalability, including fine-grained authorization and access control capabilities
Explore these capabilities with the new IBM SmartCloud Patch Management.
Service Health for IBM SmartCloud Provisioning has officially GA'ed and is now available on IBM Integrated Service Management Library ( ISML ).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring utilizing a custom agent, OS agents, and the ITMfVE agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to identify and react quickly to issues in your environment, such as an unresponsive compute node, high disk usage on storage nodes or key kernel services not responding, and minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library( ISML ) following this link -> Service Health for IBM SmartCloud Provisioning
In a dynamic cloud environment, standard concepts like IP addresses and storage volumes assume a special meaning when it comes to reserving and using them independently of the virtual machines owned by a cloud user.
The concepts of Elastic IP (EIP) and Elastic Block Storage (EBS) were initially introduced by Amazon EC2 as a way to decouple the resources assigned to a cloud user from their utilization. In other words, as a cloud user you can reserve an elastic resource and assign it to one of the VMs you own, but you can also re-assign it to a different VM whenever you need to (for example, whenever you need to replace your VM with a new one).
SmartCloud Provisioning offers similar capabilities exposing the concepts of Static Addresses and Persistent Volumes that can be reserved and assigned to any running VMs.
A SmartCloud Provisioning address is a statically defined address which can be dynamically bound to any instance in the cloud. In other words, a static IP address is associated with your account, not with a particular instance, and you control that address until you choose to explicitly release it.
Let’s examine in more detail how it works.
When SmartCloud Provisioning creates a VM, it assigns a dynamic IP address to it, on a default management sub-network. From this point on, the system always refers to the VM using the dynamic address assigned at boot time. Nonetheless, SmartCloud Provisioning offers to cloud users the possibility of assigning a different IP address, which can be seen as a reserved and static IP.
In order to achieve this result, a centralized pool of addresses is registered by the cloud administrator and stored in a durable data service. A cloud user can then reserve one or more addresses from this pool, and can associate one of them to a specific VM he owns. Note that the cloud user does not have any clue about which address will be reserved for him; he does not even know upfront if there is any static IP address left, until he sends the reservation request.
Once a static IP has been reserved and assigned to a VM, SmartCloud Provisioning internally creates a mapping between the default dynamic address associated to the selected VM and the reserved IP address. This translates into NAT rules on the host OS's iptables to forward all traffic to the private address of that VM.
In this way you can always refer to your VM using the static address, and even if you decide to re-create the VM, you can reassign that same address to the new VM.
The address remains in your reserved list as long as you need it, and you can release it when you no longer need it.
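Purely as an illustration of the mechanism (both addresses below are invented), the host-side mapping amounts to a DNAT/SNAT rule pair like this:
# Forward traffic arriving at the reserved static address to the VM's dynamic
# private address, and rewrite the VM's outbound traffic so replies appear to
# come from the static address.
iptables -t nat -A PREROUTING  -d 10.0.1.50 -j DNAT --to-destination 192.168.122.17
iptables -t nat -A POSTROUTING -s 192.168.122.17 -j SNAT --to-source 10.0.1.50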
Persistent storage is critical to any non-trivial production application. Just as Amazon's EBS has proven to be extremely valuable, SmartCloud Provisioning persistent volumes are equally powerful, offering off-instance storage that persists independently of the life of an instance. Users can create arbitrary numbers of arbitrarily sized persistent volumes. The volumes can be dynamically attached to any VM on the cloud as long as only one instance is attached at any time.
Once attached, a persistent volume appears to the guest OS like any other raw, unformatted block device.
Each persistent volume is assigned a UUID, which can be leveraged by the cloud user to track them.
RAID sets can be easily created to ensure each volume is hosted on a separate physical host/device.
Multiple block devices will then be exposed to the guest OS, which can establish its own RAID meta-devices using tools like mdadm.
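For example, assuming two freshly attached volumes appear in the guest as /dev/vdc and /dev/vdd (device names vary by hypervisor and guest OS), a mirrored meta-device could be assembled like this:
# Mirror two attached persistent volumes into a single md device, then use it
# like any other disk.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdc /dev/vdd
mkfs.ext4 /dev/md0
mkdir -p /data && mount /dev/md0 /data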
Behind the scenes, these block devices are very similar to the primary boot disk of a non-persistent VM. However, these are read-write iSCSI devices, directly attached to the instance without leveraging Copy-on-Write. Note that persistent block storage is also hosted on the same storage cluster used for master images.
Similarly to the static IP addresses, the persistent volumes are associated with your account, not with a particular instance, and you control them until you choose to explicitly delete them.
The persistent volumes allow you to keep your data separate from the OS, offering you the possibility to move them from one VM to another whenever you need. Moreover, they offer a valid mechanism to keep your data safe when dealing with VMs that do not have dedicated persistent storage (the non-persistent VMs managed by SmartCloud Provisioning).
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
I really liked the post “Rapid deployments with IBM
SmartCloud Provisioning” that explains
how simple and fast it is to deploy instances using SmartCloud Provisioning.
But it is also important to have a flexible way of passing parameters during
the deployment in order to configure and/or customize the deployed instances.
IBM SmartCloud Provisioning provides in the
launch instance panel, and also using the CLI, the “user_data” text field that
can be used for this purpose.
This capability is inspired by the Amazon EC2 instance metadata, and here you can find an interesting
article on it: http://alestic.com/2009/06/ec2-user-data-scripts
The user_data field is a free text field, so for example it can contain:
- comma-separated values for simple configurations
- multi-part MIME format for complex configurations, where each part, identified by a proper content-type, is related to a specific customization.
The launched instance can easily retrieve the user data field by invoking the
predefined URL http://169.254.169.254/latest/user-data
and processing it according to its needs.
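For instance, a first-boot script inside the instance might fetch and dispatch the payload like this (this sketch assumes the payload is itself an executable shell script, one of the formats mentioned above):
# Retrieve the user_data passed at deployment time; other formats (CSV,
# multi-part MIME) would be handed to the appropriate parser instead.
curl -s http://169.254.169.254/latest/user-data > /tmp/user-data
chmod +x /tmp/user-data && /tmp/user-data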
This can be
achieved by exploiting the current integration between IBM
SmartCloud Provisioning and the Image Construction and Composition Tool
(ICCT), available in IBM SmartCloud
Provisioning version 1.2, by creating a new bundle, the User-Data consumer bundle,
which contains a script that retrieves the “user-data” and processes it based on its content. An
interesting scenario is the capability of passing directly one or more scripts
to be invoked at deployment time to have a really dynamic configuration. In
this way, a new image can be configured/customized at deployment time.
If you want
to have more information on user-data capabilities and examples, take a look at
the Ubuntu cloud-init component described here https://help.ubuntu.com/community/CloudInit
For more information about IBM SmartCloud Provisioning and the Image Construction and
Composition Tool, see IBM SmartCloud Provisioning
Cloud systems have made a huge
improvement in terms of tracking and performance. In the “Rapid deployments with
IBM SmartCloud Provisioning” blog, we showed that virtual machines or
appliances can be started and configured in a matter of seconds. It has never
been so easy to create a virtual machine (VM), install software, and configure
middleware. However, with great power comes great responsibility… it is now
possible to create a VM, but what is its lifecycle? Will it be destroyed after
being used, is the starting image deprecated, or is there a better starting
image given the needed configuration and software install requirements?
IBM SmartCloud Provisioning provides
a component called IBM Virtual Image Library (also known as IVIL) to solve
common issues that arise in large scale virtualized environments:
- Image tracking: Where are my images? How old are they? How are they related?
- Content and security: What is in the images? Are they secure? What software is installed?
- Redundancy: Are there image redundancies? Is there any difference between two images?
The list goes on.
IVIL can be integrated simply into
your virtualization infrastructure; the only requirement to start using IVIL is
the credentials required to contact the virtualization infrastructure. No changes to your current virtualization
environment are required. After credentials
are provided, IVIL can automatically determine the provenance, state, and the
content of each virtual image or virtual machine in the virtualization
environment. After the environment is
registered you will have a clear picture
of your various images, their content, history, and similarity with one
another. More important, as soon as IVIL
is used in the infrastructure, it can be used to move the images from one
hypervisor vendor to another and keep track of these migrations. To summarize, IVIL
not only keeps track of the changes to an image on one hypervisor but continues
to do so when images move across a heterogeneous environment.
A common solution to track the
contents and versioning of images is by use of a naming convention, for
example, a name such as RHEL_6.1_WebSphere7.1_v2.1 implies the image is Red Hat
Linux 6.1 with WebSphere 7.1 installed, and that this is version 2.1 of this
image. It is feasible to use this approach with a small number of images but
becomes cumbersome and confusing with anything but small examples. Basic information that a name typically attempts
to convey includes:
- What is the OS and OS version?
- What applications are installed, and what are their versions?
- Are the latest patches and updates installed?
- How does this image relate to other versions of the same or similar images?
Using an image naming convention
can work in some cases and provide some of the needed information but it does
not scale beyond a small number of simple images. To solve this, IVIL provides versioning and
provenance control to understand where an image comes from:
What is provenance? Simply put, provenance tracks the history of the
image as it has evolved over time in the virtual environment. It tracks how the bits that make up the image
came to be – through IVIL checkout operations, image clone operations, image
copy operations, and so on. It is used
to understand the lineage of an image from the perspective of the virtual
system, which might or might not match how the user of IVIL views the image.
For example, let’s assume that you
have an image called “A”. If you decide to start this image on multiple
instances of IBM SmartCloud Provisioning or if you decide to clone this image
possibly multiple times, then IVIL will keep track of the relation between all
the created images and instances. At any time, if a security flaw is found on
A, then you can infer that the associated images and instances are likely affected
also. IVIL provides this functionality not only for a single virtual
environment, but across heterogeneous virtual environments also.
What is versioning? Versioning is the logical user-defined
lineage of an image or virtual appliance; it is the way a user would think of
versioning his or her image functionality, for example this is version 2 of my
AccountsPayableService virtual image.
When an image is available with a particular application version, the OS
and libraries behind it are often not important; only the application is. Is it
important to know its template? Not necessarily; only the information about the
OS is relevant. However, it is good to know the application version and if
there is a newer version available for this image or if a new image has been
released with the latest security patches. This is the versioning system in IVIL;
it helps to understand whether there are other versions of the application in the
infrastructure, and whether particular applications contain a patch or not.
To summarize, provenance is
oriented to infrastructure administration whereas versioning is more oriented
towards applications and workloads.
For example, let’s assume that we
want to provide version 1.0 of software S as an image. By default, users can
decide to use software S by launching an instance of image A. At a certain
point, the version 1.0 is deprecated and we must upgrade software S to version
1.1. Unfortunately, the OS distribution must be upgraded. A solution is to
reinstall the OS from scratch and install S version 1.1 on it; this new image
will be called B. These images do not have any common lineage from a provenance
perspective, however the content has a logical lineage to the user. Image A is
the parent of image B from a versioning perspective.
It is important to understand that
an image can have only one provenance parent but can have multiple version
parents. The second claim makes sense because an image may have multiple
applications installed, and thus each one may be associated with a logical version lineage.
This concludes the introduction of
Virtual Image Library component in IBM SmartCloud Provisioning. Next time, I
will introduce the concept of similarity between images and the power that it
provides in terms of debugging, infrastructure consolidation, licensing cost, and more.
As customers consolidate and virtualize application workloads along their journey toward Cloud, the cost savings that they had envisioned often prove elusive. True efficiency comes from the ability to right-size both the environment and the virtual workloads – in response to actual performance data, rather than theoretical estimates – in order to create an optimized Cloud infrastructure that runs densely enough to provide true consolidation while maintaining application service levels and room for expansion. The migration to a Cloud infrastructure, where the physical resources that we're accustomed to monitoring have been "abstracted" into pools of virtual resources, presents us with a visibility problem. It's more difficult to tweak the knobs and turn the dials to make an individual server respond to our management needs. More importantly, any changes we make at the Cloud infrastructure level have the potential to dramatically affect other workloads and services.
Join us on February 16, 2012 for Simplify Cloud Management with IBM SmartCloud Monitoring, where Ben Stern will demonstrate how our latest infrastructure management offering can help a Cloud or virtualization administrator overcome those visibility hurdles, leveraging infrastructure monitoring, health dashboards, performance and capacity analytics, and policy-driven optimization of workloads and their placement in the Cloud. Most customers want a Cloud monitoring product that can be plugged into their existing data center monitoring toolset, as part of an enterprise-proven, heterogeneous solution, providing continuity of historical data and preservation of skills. You'll hear how SmartCloud Monitoring has descended from the same IBM Tivoli Monitoring DNA running in the data centers of the world's largest corporations, and quickly discover that you already know more about SmartCloud Monitoring than you realized.
Ben Stern has spent over 20 years working in the IT industry in a variety of management and technical roles within the software development organization. Prior to his current role, he was the lead for the Tivoli Service Availability and Performance Management Best Practices team. In that role, he helped define best practices for the Tivoli portfolio while working with hundreds of customers around the world. In his current role, he is focusing on Tivoli's virtualization and cloud solutions.
Link to Register
Select the session that fits your schedule.
February 16th, 2012, 11:00 AM to Noon EST US and Canada (GMT -05:00): https://de202.centra.com:443/Reg/main/000000605ae4440134e542dc87007e8e/en_US
February 16th, 2012, 6:00 PM to 7:00 PM EST US and Canada (GMT -05:00)
With December's release of IBM SmartCloud Monitoring, Tivoli's venerable IBM Tivoli Monitoring product family, proven in data centers at the world's largest corporations, begins to adopt a "Cloud" posture. Sure, "Cloud" is a term bereft of a clear operational definition that we can apply at any given moment, and customers, analysts and vendors tend to bandy it about pretty freely these days. However, if we don't get too hung up on what Cloud is or isn't, we can probably agree that it represents a migration from our traditional server-delivered infrastructure to one composed of pooled computing resources shared by virtual workloads. Whether or not our customers are calling their virtualized environments "private clouds" today, and whether or not they've got a "cloud budget" that they're using for such initiatives, the fact that they're moving along the cloud maturity continuum at some pace seems inescapable, given IDC's assertion that we crossed the magical "50%" boundary last year, when half of all corporate workloads were running on virtual machines instead of physical ones.
If we're beginning to think in terms of clouds of pooled computing resources, it makes sense that we begin to deliver management solutions in the same way, right? If the server administrators, storage administrators and network administrators now report to a cloud administrator, we should begin to package solutions for those cloud administrators, combining multiple pieces of management technology into a single part number that customers can purchase and deploy. That's exactly what we've done with SmartCloud Monitoring. The discrete monitoring agents at the heart of IBM Tivoli Monitoring (OS monitors, application monitors, storage monitors, and so on) are as important as they ever were. Even though we're pooling those resources across virtual machines, we still have to monitor things like processes, CPU activity, IO throughput, and so on. We just need to add a layer on top of all that granular detail, so the cloud administrator can see, at a glance, what's healthy or unhealthy about his cloud environment, before drilling down into the nuts and bolts.
SmartCloud Monitoring combines the VMware virtualization management features in ITM for Virtual Environments with virtual machine instance monitoring from ITM's operating system agents, to monitor a cloud infrastructure and the workloads running on it.
Our roadmap looks like an analyst's cloud maturity ladder, adding features such as automated provisioning, usage and accounting integration, and more detailed network monitoring, so our solution will "mature" along with the market, and customers' needs. See if the challenges along this ladder look like things that you or your customer have faced on their cloud journey, or are grappling with now. It's important to note that Tivoli has solutions that can be applied to each step, and for each problem. What SmartCloud promises is a way to bring those solutions together into more consumable bundles, tightly integrated together, to make cloud management simple to purchase and simple to deploy.
SmartCloud Monitoring delivers key capabilities for optimizing and maintaining a private cloud, including:
- Health dashboards, to provide an instant, consolidated glimpse into cloud health
- Topology views of the key interrelated components of the cloud
- Reports on the health trends of cloud components and workloads, powered by Cognos
- What-If capacity planning scenarios
- Policy-based optimization to put workloads where they’ll perform best, not just where they’ll fit
- Performance Analytics for right-sizing of virtual machines
- Integration with industry-leading Tivoli service management portfolio
Modern Cloud infrastructures are built on thousands of highly distributed servers, used to provide services directly to customers over the Internet. The service provider has two extremely important objectives which, unfortunately, are to some degree in tension: a) ensure continuous availability of the Cloud service, and b) contain the cost of the infrastructure and administration (CAPEX and OPEX).
There are several factors that have an impact on the availability of services, mostly related to infrastructure failures. Failures are not only related to unrecoverable hardware outages, but also to recoverable OS or middleware failures.
Not so long ago, the most common approach to high availability was to deploy infrastructures with the highest Mean Time To Failure (MTTF) possible, which required expensive systems and assumed it was possible to write error-free software. It was also assumed that some degree of downtime was acceptable, with vendors boasting of the number of 9's they could support (e.g. 99.999% availability). In today's always-on Internet, any downtime of a major service becomes headline news. The traditional approach is no longer applicable, and a new approach has to be considered.
Given the requirement to reduce infrastructure costs, service providers are using commodity hardware. Given also the requirement to reduce operational costs, hardware failures are commonly dealt with by directly replacing the failed component rather than by manual debugging and recovery by skilled (and expensive) administrators. Thus, to maintain the objective of continuous availability, the Cloud system must be built to expect failure of the underlying infrastructure - not just temporary outages, but components that disappear forever. This cannot be limited to hardware components: no matter how well a software element is tested, unexpected edge conditions will appear at some point in time. So, to guarantee continuous availability, a Cloud solution must expect its own components to fail, too.
Given that we are forced to expect failure, the high MTTF approach is no longer valid, and instead we have to increase availability by flipping the approach to minimizing Mean Time To Recovery (MTTR). The quicker the system can recover from failure, the higher the availability of the service will be. Given however that even a tiny percentage of downtime is no longer acceptable, we also need a means to maintain service availability during the recovery process. One way of doing this is through providing redundancy of all critical services within the Cloud solution.
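The arithmetic behind this flip is simple: steady-state availability is MTTF / (MTTF + MTTR), so shrinking recovery time can buy more nines than stretching time-to-failure ever could. A quick sketch, with purely illustrative numbers rather than measured values:

def availability(mttf_hours: float, mttr_hours: float) -> float:
    # Steady-state availability = MTTF / (MTTF + MTTR)
    return mttf_hours / (mttf_hours + mttr_hours)

# Classic approach: expensive hardware that rarely fails, slow manual recovery.
print(availability(10_000, 8))     # ~0.99920 (about "three nines")

# MTTR-centric approach: commodity hardware fails ten times as often,
# but automated recovery takes minutes instead of a working day.
print(availability(1_000, 0.05))   # ~0.99995 (better than "four nines")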
SmartCloud Provisioning is designed according to Recovery Oriented Computing (ROC) principles: it is based on a highly distributed, redundant, and robust infrastructure with near-zero downtime and automated recovery across heterogeneous platforms, and it does not require expensive systems but can run on relatively low-cost commodity infrastructure.
The key factors that allow SmartCloud Provisioning to be a low-touch and robust cloud infrastructure are the following:
- The infrastructure is as stateless as possible, which avoids issues related to single points of failure.
- Management agents are deployed on the physical nodes of the infrastructure (compute nodes and storage nodes) and are connected in a peer-to-peer network to form a self-monitoring, self-managing infrastructure.
- Core services are redundant, being deployed in clusters to tolerate individual faults.
- Master images are replicated in multiple copies across the storage nodes in the storage cluster; this tolerates hardware failures of the storage nodes as well as network failures when accessing one copy of an image.
- Hypervisor (compute) nodes are deployed via a stateless boot, so a failing hypervisor can be re-deployed by simply rebooting it and getting a fresh copy of the hypervisor image. This also allows easy deployment of new nodes when needed, to augment the capacity of the infrastructure.
Let's consider some typical failure scenarios that can happen in a real environment, and see how SmartCloud Provisioning is designed to tolerate them and react appropriately.
The first example concerns the management agents that SmartCloud Provisioning uses to perform standard provisioning operations.
Management agents are deployed on both the compute nodes and the storage nodes and are organized in dynamic hierarchies, where a leader (manager) is dynamically elected. The leader is just the entry point for distributing requests across the infrastructure and the coordinator of any operation; the role does not imply any special information being associated with the agent itself (the infrastructure is stateless), so any agent can become a leader.
All agents have a watch-dog mechanism that is used to prevent, detect, and correct failures; they also monitor each other in their neighborhood and can start simple actions to fix other agents' issues.
So, if an agent fails, its watch-dog mechanism tries to restart it. If the watch-dog cannot restart the agent, neighbors try some simple actions to restart it. If the agent still cannot be restarted, the system keeps working without that node, thanks to the redundant infrastructure.
If the failing agent was a leader and cannot be restarted, the remaining agents re-elect a leader dynamically, without losing any information.
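As a rough sketch of this recovery flow (the class and method names below are hypothetical illustrations, not the product's actual internals):

import random

class Agent:
    def __init__(self, node_id: str) -> None:
        self.node_id = node_id
        self.alive = True
        self.is_leader = False

    def watchdog_restart(self) -> bool:
        # The local watch-dog tries to restart the agent process first.
        self.alive = random.random() < 0.8   # assumed success rate
        return self.alive

    def try_restart_peer(self, peer: "Agent") -> bool:
        # A neighbor attempts a simple remote recovery action.
        peer.alive = random.random() < 0.5   # assumed success rate
        return peer.alive

def handle_agent_failure(failed: Agent, neighbors: list) -> None:
    failed.alive = False
    # 1. The local watch-dog attempts a restart.
    if failed.watchdog_restart():
        return
    # 2. Neighbors in the peer-to-peer network try simple recovery actions.
    for peer in (p for p in neighbors if p.alive):
        if peer.try_restart_peer(failed):
            return
    # 3. The node stays down; redundancy lets the system keep working.
    #    Roles are stateless, so losing a leader just triggers re-election.
    if failed.is_leader:
        failed.is_leader = False
        survivors = [p for p in neighbors if p.alive]
        if survivors:
            random.choice(survivors).is_leader = True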
Another example concerns failures in a storage node or a compute node.
If a storage node fails, the deployment of VMs can continue without issues, thanks to the redundant deployment and the multiple copies of the same image available in the storage cluster; meanwhile, the leader agent will try to restart the failing node.
If a compute node fails, the leader detects the failure and stops sending requests to that node. Moreover, it tries to restart the node, forcing a fresh copy of the compute node image to be re-deployed via PXE boot.
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple—you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations who are still trying to leverage the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
- Rapidly scalable deployment designed to meet business growth
- Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
- Reduced complexity through ease of use and improved time to value
- Reduced IT labor resources with self-service requesting and highly automated operations
- Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we've seen customers get a cloud up and running in just hours - realizing immediate time to value. It's fast - administrators have been able to go from bare metal to ready-for-work in under five minutes, start a single VM and load an OS in under 10 seconds, or scale up to 50,000 VMs in an hour (on 50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
With the new release of IBM SmartCloud Provisioning this week, you can try and see firsthand the potential of this breakthrough technology to accelerate your journey to cloud. And if you want a preview of what’s in development, you can join our Open Beta program for access to beta-level code.
Starting from December 9th, 2011, IBM SmartCloud Provisioning 1.2 is available for download.
The key features introduced in this release are:
Full product install through an interactive tool:
IBM SmartCloud Provisioning can now be installed using a graphical wizard. Two flavours of the installer are available: minimal and custom. The custom installation allows you to specify the number of instances of HBase and ZooKeeper to be deployed. Moreover, it allows you to automatically configure ESXi servers as compute nodes. The creation of the management virtual image on VMware is automated.
Support for multiple networks:
You can now deploy images with more than one NIC, and different users can deploy images in segregated networks.
Integration of the Image Construction and Composition Tool:
The Image Construction and Composition Tool helps build and customize master images. It is designed to facilitate a separation of concerns and tasks, where experts build software bundles for reuse by others. This design approach greatly reduces the complexity of virtual image creation and reduces errors.
Support of Open Virtualization Format (OVF):
- OVF images can be created or modified by the Image Construction and Composition Tool
- OVF metadata can be displayed and modified in the Self Service UI
Integration of the Virtual Image Library component:
The Virtual Image Library helps manage the life cycle of virtual images:
- Search images for specific software products
- Compare two images and determine the differences in files and products (illustrated below)
- Find similar images
- Track image versions and provenance
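As a toy illustration of what the compare operation boils down to - set differences over each image's file and product inventories - here is a short sketch. This is not the Virtual Image Library's actual API; the manifests and names are invented for the example.

def compare_images(manifest_a: dict, manifest_b: dict) -> dict:
    # Each manifest maps "files" and "products" to a set of entries.
    return {
        "files_only_in_a":    manifest_a["files"] - manifest_b["files"],
        "files_only_in_b":    manifest_b["files"] - manifest_a["files"],
        "products_only_in_a": manifest_a["products"] - manifest_b["products"],
        "products_only_in_b": manifest_b["products"] - manifest_a["products"],
    }

image_a = {"files": {"/usr/bin/s", "/etc/s.conf"}, "products": {"S 1.0"}}
image_b = {"files": {"/usr/bin/s", "/etc/s2.conf"}, "products": {"S 1.1"}}
print(compare_images(image_a, image_b))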
The cloud administrator can use a brand new UI to perform tasks like registering images, registering networks, managing quotas, assigning roles, and managing elastic IPs.
The IBM® Image Construction and Composition Tool is a web application that simplifies and automates virtual image creation for public and private cloud environments, shielding the differences in cloud implementations from its users.
This white paper provides Software Specialists and other product experts with helpful tips and techniques to plan, design, and create software bundles in the Image Construction and Composition Tool.
Exciting news! We announced this week the upcoming availability of IBM Tivoli Monitoring for Virtual Environments v7.1 (formerly known as ITM for Virtual Servers). Why did we change the name? Previously, our focus was on ensuring the health of the virtual server environment: VMs and hosts, plus virtual storage and network elements such as data store capacity. With this release, we are focused across the entire virtual environment, including physical network and storage performance, thus providing a holistic view of all physical and virtual shared resources. This offering will be generally available November 23rd. Enhanced capabilities include:
- New capacity planning reports for recommendations on workload placement, highlighting potential energy and server cost savings while adhering to co-location policies. You can now use benchmarking data, results simulation, and a policy framework to more intelligently assess where workloads should be placed, instead of relying solely on resource availability in the virtual host farm.
- Busy administrators can make rapid assessments of server, storage, and network components - showing physical and virtual performance and change history - via a new Web 2.0 dashboard with sensible default settings.
- Organizations with diversified virtualization investments can extend Tivoli virtual environment performance and availability monitoring to Citrix XenApp and XenDesktop via newly released monitoring agents.
- If you have invested in the Cisco Unified Computing System (UCS)
platform, you can now monitor performance and availability attributes
of UCS systems, including chassis and blade health, network fabric
health, and storage management integration.
Check out the official announcement:
I recently found this article, which discusses the rationale for cloud adoption: http://www.tmcnet.com/usubmit/2011/10/31/5893685.htm. One factor listed is capacity management: "Users are considering cloud for capacity management issues including periodic demand peaks and better management of data center growth, power, and cooling issues." This statement speaks to the maturity of the clients surveyed: those considering cloud already have the visibility into their virtual environment to understand workload usage trends, such as predicting peaks and projecting growth of data center resource consumption. In other words, before clients can leverage cloud to add capacity for periodic demand peaks, they must first have capabilities in place for visibility into their existing infrastructure. I am curious whether any of those surveyed have optimized their virtual environment; that is, have they right-sized their workloads and placed them in a way that maximizes available capacity?
Would you like to show and charge for usage of your IBM Power Systems server?
You may already be aware of the concept of a virtualized system and virtual machines. Virtualization might be used by your organization as a means to share physical resources, or it may form the basis of your cloud infrastructure. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workload utilization. The question is: how do you analyze the usage of such resources and charge appropriately where required?
The Tivoli Usage and Accounting Manager (TUAM) team is pleased to announce that the TUAM IBM Hardware Management Console (HMC) collector also supports collecting usage information from the IBM Systems Director Management Console (SDMC), and facilitates analyzing, reporting, and billing based on the usage and costs captured in this metering data. This provides a means for enterprises to migrate from HMC to SDMC while ensuring continuity of showback/chargeback solutions based on TUAM. Future versions of the HMC/SDMC collector will exploit SDMC-specific features.
Capabilities of the collector include:
- Ability to capture allocation (entitlements) and usage information for each LPAR, Processor Pool, Memory Pool, and the overall System
- Ability to capture capped and uncapped usage and charge different amounts for each (see the sketch after this list)
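For example, a hypothetical rating scheme might charge different rates for capped and uncapped CPU consumption. The rates and usage figures below are invented for illustration; they are not TUAM defaults.

CAPPED_RATE = 0.05     # $ per CPU-hour within the entitlement (assumed)
UNCAPPED_RATE = 0.08   # $ per CPU-hour above the entitlement (assumed)

def lpar_charge(capped_cpu_hours: float, uncapped_cpu_hours: float) -> float:
    # Bill each consumption class at its own rate.
    return capped_cpu_hours * CAPPED_RATE + uncapped_cpu_hours * UNCAPPED_RATE

# An LPAR that used 100 CPU-hours within its entitlement and 20 above it:
print(f"${lpar_charge(100, 20):.2f}")   # $6.60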
What is IBM Systems Director
Management Console (SDMC)?
The SDMC provides hardware, service,
and virtualization management for your Power Systems server.
The SDMC is the successor to the HMC and the Integrated
Virtualization Manager (IVM), and shows how IBM Systems Director is
going to take an increasingly important role for administrators. For
more information on SDMC, see this blog.
For more information about the IBM
PowerVM HMC data collector, see the TUAM
7.3 Information Center. The collector is available as part of
the TUAM 7.3.0 Enterprise Edition Base Collector Pack.
Unlock the Value of Virtualization with Integrated Service Management Whitepaper