I wanted to let everyone know that a Trial Virtual Machine is available for the SmartCloud Monitoring version 7.2 FP1 product. The Trial provides 90 days of access to the software to monitor your virtualized environment and includes the Capacity Planning tools for VMware and PowerVM. These tools can help you optimize your virtualized environment and save money.
Within a few hours you can have the Virtual Machine up and running and monitoring your virtualized environment.
This is a great tool if you are working with a customer on a Proof of Concept. Or, if you are a customer, it is a really quick and easy way to evaluate the software.
The Trial includes the SmartCloud Monitoring product plus a little bit of extra content. It includes monitoring for:
PowerVM (including OS, VIOS, CEC, and HMC)
Citrix XenApp, XenDesktop, XenServer
Log File Monitoring
Agent Based and Agent-less Operating System monitoring
Integration with Tivoli Storage Productivity Center
Integration with IBM Systems Director
The trial also includes Predictive Analytics, Capacity Planning and Optimization for VMware and PowerVM.
You can find the software at the following URL: https://www.ibm.com/services/forms/preLogin.do?source=swg-ibmscmpcvi2
If you have any questions or need assistance, you can send me an email at email@example.com
Is Cloud the Next Utility? Hmmm, there's an interesting thought. I know many of us are still trying to define "what is Cloud" and "what does Cloud mean for my business." I spend my days in IBM's Cloud & Smarter Infrastructure team working on bringing "Cloud Solutions" to our customers. This turns out to be simply helping customers use what they have, maybe with a little twist, to build out their data center in a more robust, efficient manner. It's all about helping them meet the needs of their business. IT teams have been doing derivations of "do more with less" for years, and as technology matures, specifically with virtualization and the management of virtual environments, IT teams are able to improve their Quality of Service and reduce their Total Cost of Ownership while increasing the services they offer their consumers. How, you ask? IT managers everywhere are trying to figure out how to provide their services in a Utility-like format utilizing "the Cloud." Think about how natural it is for us to expect that flipping on a light switch magically makes electricity flow and the light come on; what if we could select a service and systems were magically provisioned, business processes established and the service made available - would that provide business value? I'll use this blog post to start you thinking about "Cloud as the Next Utility" and get you wondering if we do indeed have front row seats to a Cloud Computing Revolution.
Should I get my electricity from the local power company, or do I invest in sustaining my own private electricity? As individuals focus on a greener way of life, they may be asking themselves that question as they focus on using what they have while minimizing their own carbon footprint. People have options that they can ponder to figure out what is best for their families, their lives and their utility usage.
Organizations are iterating over multiple decision points to meet the growing IT infrastructure requirements of their business, and those decisions are continually influenced by many different things. Do they already have a large IT investment? Do their IT requirements expand and contract depending on current activities? Do they have a diverse investment in, and skills with, existing service management tools? It's apparent that these same questions can be asked regardless of the size of your IT shop - big and small companies alike have come to depend on IT services, and this dependency continues to grow in this computing revolution. One direction an IT manager might take is a hybrid approach. Such a strategy allows them to utilize their current investment in infrastructure, tools and skills to build out their own Private Cloud while keeping a Public Cloud vendor accessible when their computing needs spike. Companies have options that they can ponder to figure out what is best for their company's business, their IT environments and their utility usage.
How is it that when I buy a new appliance, I don't even think about whether it will work with my electrical outlets when I get home? Standardization of how electricity is delivered and consumed is ubiquitous and expected worldwide. As consumers, we buy a utility and have an expectation that it will work with all our appliances to meet our needs.
Most companies have already made large investments in both infrastructure and tools, as well as the skills to maintain them. The need to deliver more with what is available requires technology that can bridge from existing environments to new environments regardless of the influences. By adopting and adhering to computing standards, tooling can utilize data from existing systems while meeting the needs and different delivery methodologies of the business in new ways. This means I can ease the coordination of complex tasks and workflows in my data center while leveraging the existing skills, processes and technology artifacts to deliver services in new, different ways. By employing computing standards in my data center strategy, I can be assured that I can rely on Public Cloud for my processing spikes; I can utilize both my older infrastructure while integrating in new infrastructure; all with the same extensible tooling for deployment and management of compute, storage and networking. As you're defining the next milestones in your cloud strategy/roadmap, I would encourage you to investigate what is in use today in your data center and how you need to evolve/reuse that investment. IBM offers SmartCloud Orchestrator to manage your existing datacenter(s). It provides capabilities based on industry standards and can extend your current processes to Cloud, helping you bring Cloud Services to your users in the same self-service manner as flipping on a light switch or plugging in a new appliance. A utility that will work to meet your users' needs.
Until next time, keep your head in the Clouds.
The challenges of
virtualized environments are driving the shift to greater integration of
service management capabilities such as image and patch management, high-scale
provisioning, monitoring, storage and security. Join us for this webcast to learn how
organizations can realize the full benefits of virtualization to reduce
management costs, decrease deployment time, increase visibility into
performance and maximize utilization.
If you're in North America, register here for the April 16th session:
If you're in Asia Pacific, register for the April 23rd session:
Determine the right cloud orchestration strategy to
address the unique needs and pain points of your organization while increasing
productivity and spurring innovation. And learn more about the recently announced orchestration capabilities from IBM that leverage OpenStack to manage heterogeneous hybrid environments. Sign up today!
Even if you
weren’t at IBM Pulse, trending right now on the web is chatter about IBM’s
announcement to leverage open technologies pervasively in the development
of its cloud offerings.
With SmartCloud Orchestrator—an integrated platform to standardize and manage
heterogeneous hybrid environments—IBM is launching its first commercial offering
based on OpenStack. And with SmartCloud Orchestrator, IBM is also redefining
the scope of orchestration
to encompass the streamlining and integration of all resources, workloads and
services. The need for this kind of capability is addressed in the latest IDC
report, which discusses why it will become a priority as organizations look
to improve operational efficiency and reduce the mess and complexity of growing
cloud environments. The ability to standardize and automate cloud services includes integrating performance and
capacity management, usage and accounting, and rich image lifecycle management.
In addition, services and tasks such as compute and storage provisioning,
configuration of network devices, integration with service request and change
management systems and processes can all be streamlined. Out-of-the-box robust
workload patterns also enable fast development of cloud services.
With SmartCloud Orchestrator, it’s all brought together to seamlessly manage
heterogeneous environments, allowing organizations to build on existing
investments and open source technologies.
If you haven’t had time to catch up on what’s trending, here’s the short version on
how IBM is helping to advance
the cloud to drive innovation.
TUC Webcast: Maximize the Benefits of Virtualization for Greater ROI
Please join the Tivoli User Community
for a live Webcast and opportunity for Q&A, Thursday, February 21, 2013 at 11:00 AM ET USA.
Click here to reserve your webcast seat now
About this Webcast:
Challenges with virtualized environments are driving the shift to
increased integration of service management capabilities such as image
and patch management, high-scale provisioning, monitoring, storage and
security. In this webcast, learn how organizations can realize the full
benefits of virtualization including reduced management costs, decreased
deployment time, increased visibility into performance and maximized utilization.
About the Speaker:
Matthew Rodkey, Product Manager focusing on Tivoli Cloud Solutions
Matt Rodkey is a Product Manager focusing on Tivoli Cloud Solutions.
In 13 years with IBM, Matt has worked in a number of areas in the Tivoli
portfolio including Security, Monitoring, and Service Delivery.
The Official Tivoli User Community is the largest online and offline
organization of Tivoli professionals in the world – home to over 160 local User
Communities and dozens of virtual/global groups from 29 countries – with more
than 26,000 members. The TUC community offers Users blogs and forums for
discussion and collaboration, access to the latest whitepapers, webinars,
presentations and research for Users, by Users and the latest information on
Tivoli products. The Tivoli User Community offers the opportunity to
learn and collaborate on the latest topics and issues that matter most.
Membership is complimentary. Join NOW!
Please join the Tivoli User
Community for their next webcast on IBM SmartCloud Consumer Monitoring. This webcast will be held on
Tuesday, January 22, 2013 at 11:00 AM ET, USA.
IBM SmartCloud Consumer Monitoring is a new product
developed for cloud consumers and service providers. An innovative new architecture embeds
monitoring technology in library images, so newly deployed VMs are discovered
and monitored within seconds of being instantiated. “Fabric Nodes” use innovative distributed
database technology to display data for nodes and applications, or logical
groupings of them, and run as virtual machines alongside the application
VMs. New fabric nodes come online as
needed, and shut themselves down when no longer needed, ensuring optimum use of resources.
ABOUT THE SPEAKER:
Ben Stern, Executive IT Specialist, IBM Cloud & Virtualization products
Stern is an Executive IT Specialist.
For the past several years, he has defined Best Practices for Tivoli's
SAPM portfolio. Most recently, he has
taken on the Best Practices role for the Cloud and Virtualization products.
The Official Tivoli User Community is the largest online and offline
organization of Tivoli professionals in the world – home to over 160 local User
Communities and dozens of virtual/global groups from 29 countries – with more
than 26,000 members. The TUC community offers Users blogs and forums for
discussion and collaboration, access to the latest whitepapers, webinars, presentations
and research for Users, by Users and the latest information on Tivoli
products. The Tivoli User Community offers the opportunity to learn and
collaborate on the latest topics and issues that matter most. Membership
is complimentary. Join NOW!
1. On the VM console menu, navigate to VM > Guest > Install/Upgrade VMware Tools
2. Open the folder containing the VMware Tools installer in a terminal, extract the archive, and run the installer: tar xzf VMwareTools-4.0.0-261974.tar.gz && ./vmware-tools-distrib/vmware-install.pl
3. Run /usr/bin/vmware-config-tools.pl
The steps mentioned above usually work fine on a 64-bit OS. However, today I had to create a 32-bit RHEL 6.1 guest and ran into a couple of issues:
1. gcc was not installed. Install the appropriate gcc RPM for the OS.
2. Kernel header files were not found in /usr/include. After a little googling, I found the solution to this issue.
i. Run uname -r to find the running kernel version.
ii. Install the matching headers: rpm -ivh kernel-devel-<version found in the command above>
iii. ls -d /usr/src/kernels/$(uname -r)*/include gives us the kernel header path, which we can then feed to the vmware-config-tools.pl prompt.
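The header fix above can be condensed into a small shell sketch. This assumes a RHEL-style system; the yum install line is printed rather than executed, since it needs root, and on some systems the kernel-devel directory carries an extra architecture suffix.

```shell
# Minimal sketch of the kernel-header fix above (RHEL-style system assumed).
KVER=$(uname -r)
# gcc and the matching kernel headers are needed to build the VMware modules;
# the install command is printed here because it requires root.
echo "Run as root: yum install -y gcc kernel-devel-${KVER}"
# Once installed, this is the include path to feed to vmware-config-tools.pl
# (some systems append an architecture suffix to the directory name):
HEADERS="/usr/src/kernels/${KVER}/include"
echo "Kernel header path: ${HEADERS}"
```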
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes complex to accurately assess or manage. Cost management is crucial to integrate into overall service management, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency-- the ability to allocate IT costs, usage, and value.
My three favorite things about OpenStack are
- The People
- The Innovation
- The Interoperability
San Diego was my second OpenStack summit. Many of the same faces were in the design
summit sessions I attended, but there were many new faces as well. One of the most exciting observations from
the Folsom design summit was the incredible talent pool assembled. The Grizzly summit was no different – it’s
great to interact with so many incredibly smart, deep and experienced
people. I’m convinced that a single
company could never amass such a collection of quality talent for one
project. I guess it’s no wonder they’re
saying OpenStack is the fastest growing open source project ever.
I must apologize in advance, because I am sure to miss
someone, but I want to tell you about some of the people I interacted with in
the nova, glance, and cinder design sessions.
Over the past few months I’ve really been impressed with the PTLs
(Project Technical Leads). They’re very smart, highly
motivated, and excellent facilitators.
The design sessions invariably get into open debate, but productive
debate. I was impressed with the PTLs’
natural abilities to channel the discussion to bring out the key issues and
land on some concrete next steps.
I got to meet Microsoft’s Peter Pouliot, whose heroic and
tenacious efforts successfully delivered Hyper-V support after a rather dodgy mess earlier in the year. Peter is not your stereotypical Microsoft
developer. He’s an open source guy
through and through. It’s clear that his
personal spirit had a lot to do with corralling the community to deliver
quality code in a very short time frame.
It was great to meet Peter and some of his non-Microsoft
collaborators. Great job guys!
I also had the pleasure to meet with some of VMware’s
developers and not just those acquired via the billion dollar Nicira
acquisition. The Nicira guys are great –
no question but I was also very pleased to meet the VMware developer who
completely rewrote the less than adequate VMware compute driver. I hope to work closely with them to ensure
the hypervisor is well supported and as interoperable as possible with other
proprietary and open source technologies.
Of course, I can’t speak of OpenStackers without mentioning
RackSpace. Over the past two summits, I
got to interact with a number of RackSpace developers, aka Rackers. I’ve got to hand it to them, they really do
have a great bunch of people and definitely
bring a massive scale service provider
perspective to the discussion. Of
course, being an IBMer myself, I can’t help but bring the enterprise customer
perspective into the mix. I think
OpenStack benefits greatly from these two perspectives brought together in open collaboration.
OpenStack has done a great job defining an extensible
framework for IaaS. This flexibility not
only helps accommodate the varied needs from enterprise to service provider,
but it also enables a massive sea of innovation. Since the Nicira acquisition there’s been a
lot of attention on the innovation around software defined networking and
quantum, the OpenStack project that provides the abstraction for a variety of
implementations ranging from proprietary,
to pure open source like Open vSwitch, to traditional standard
networking equipment. I think storage is
even hotter than networking these days with a slew of vendors combining
commodity 10Ge switches with commodity Intel servers with a combination of SSDs
and spinning disks to provide new approaches to storage for virtualized
environments. Of course software plays a
critical role in many of these virtualized storage solutions. DreamHost’s open source distributed file system Ceph has been getting a lot of interest.
Enterprise storage vendors like NetApp, IBM, and HP have also
contributed cinder drivers to support their products within OpenStack
clouds. There were also a number of
summit discussions about exposing the different backend implementations of the
abstractions with different qualities of service. Some people, including one of my developers,
have begun to use “Volume Types” as a way to let users choose the kinds of
volumes they need. I believe this is
critical for compute clouds to cover the broadest spectrum of workloads. Of course this principle applies to other
resources and not just cinder volumes.
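As a concrete sketch of the "Volume Types" idea, the cinder CLI of that era let an operator define a type, tie it to a backend, and let users request it at create time. The type and backend names below are hypothetical, and the commands need a configured OpenStack cloud, so this sketch only prints them when the cinder CLI is absent:

```shell
# Illustrative "Volume Types" flow (hypothetical names: ssd-tier, ssd_backend).
demo() {
  # run the command if the cinder CLI is available, otherwise just show it
  if command -v cinder >/dev/null 2>&1; then
    "$@" || echo "command failed: $*"
  else
    echo "would run: $*"
  fi
}
demo cinder type-create ssd-tier                                      # define a type
demo cinder type-key ssd-tier set volume_backend_name=ssd_backend     # map it to a backend
demo cinder create --volume-type ssd-tier --display-name fast-vol 10  # 10 GB volume
STEPS=3
```

With a type mapped to a backend this way, the scheduler can place the volume on the matching storage, which is what lets users choose the kind of volume they need.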
I saw a lightning talk about a nova driver for SmartOS, a cool open source project from Joyent combining Solaris Zones, ZFS, and KVM. ARM and Power CPU
support were presented as well, along with a couple of bare metal solutions. Intel, KVM, and OpenStack certainly make a
nice combination, but there’s so much more that’s possible with OpenStack.
Finally, perhaps the most important thing about an OpenStack
cloud is interoperability. Starting with
the hypervisor, IBM has a solution that enables interoperability of images,
volumes, and networks across Xen, KVM, VMware and Hyper-V. We had a few sessions where we
discussed how we can bring the same interoperability to OpenStack. To start with, we need to be able to register
read-only cinder volumes as glance images.
Next, to ensure we can scale out, we need to be able to register multiple copies of the same image.
Finally, to take advantage of performance we need to abstract the clone
operation to enable Copy on Write (CoW), Copy on Read (CoR), as well as the
current local cache plus CoW mechanism for backwards compatibility and to
support 1Ge networks. Combining these
will enable images to work across multiple different hypervisors.
We also need interoperability with existing images which
means VMware and Amazon as the two most common forms of images. Today, it’s quite easy to automate simple
image formatting differences, but the challenge is in the assumptions made by
the images. The current direction for
OpenStack is to use config drive v2 to pass instance metadata to the
guest, which is responsible for pulling key system configuration such as hostnames,
credentials, and IP addresses. Typical VMware
images on the other hand generally expect either a push model, where the
hypervisor manipulates the filesystem prior to booting the image, or via their
guest agent, VMware tools.
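The pull model can be sketched from the guest's point of view. Under config drive v2 the drive is attached with the filesystem label config-2 and the metadata lives under openstack/latest/; outside an OpenStack guest, the sketch simply reports that no drive was found:

```shell
# Sketch of the config drive v2 "pull" model from inside a guest.
META_REL="openstack/latest/meta_data.json"   # config drive v2 layout
DEV=$(blkid -L config-2 2>/dev/null || true) # the drive is labeled config-2
if [ -n "$DEV" ]; then
  mkdir -p /mnt/config
  if mount -o ro "$DEV" /mnt/config 2>/dev/null; then
    # hostname, credentials, network info, etc. are pulled from here
    cat "/mnt/config/${META_REL}" || true
    STATUS="found"
  else
    STATUS="found but not mountable (need root)"
  fi
else
  STATUS="missing"
fi
echo "config drive: ${STATUS}"
```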
To make matters worse, the current OpenStack code assumes
different image formats for each supported hypervisor. One of the sad punchlines from Troy Toman’s keynote was that RackSpace’s private cloud distro named Alamo does not interoperate
with their public cloud even though they’re both OpenStack. The good news is that, as Troy went on to
say, the time has come to focus on interoperability.
I got into a great conversation with Jesse Andrews, one of
the original OpenStack guys now at Nebula.
He described the approach to image interoperability by enabling cloud
operators to provide custom image workers at image ingestion time. This enables cloud providers to register
custom image processing code that gets called whenever an image is uploaded to
Glance. The simplest case of this is to
convert image formats to enable Alamo KVM images to run on RackSpace’s Xen
based public cloud.
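For the simplest case, the format conversion can be done with qemu-img. The filenames below are hypothetical, and the command is only printed unless qemu-img and the source image are actually present:

```shell
# Sketch of the simplest interoperability fix: converting a KVM qcow2 image
# to raw for a Xen-based cloud (hypothetical filenames).
SRC="alamo-image.qcow2"
DST="alamo-image.raw"
CMD="qemu-img convert -f qcow2 -O raw $SRC $DST"
if command -v qemu-img >/dev/null 2>&1 && [ -f "$SRC" ]; then
  $CMD
else
  echo "would run: $CMD"
fi
```

An ingestion-time image worker would run something like this automatically whenever an image in a foreign format is uploaded to Glance.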
Fortunately, IBM’s SmartCloud Provisioning (SCP) includes
some image management technologies which can help with the more challenging
problems mentioned above. Today’s SCP
2.1 will interrogate images in the library and check for cross hypervisor
compatibility. Users gain visibility
into this information and can optionally automate fixes wherever possible. We also use this technique to detect the
presence of a critical guest agent.
This brings me to one of my favorite little open source
projects, cloud-init, created by Scott Moser at Canonical. If only it weren’t GPL ;-). Many OpenStackers are using cloud-init to
automate the system configuration pull from config drive v2 mentioned above. This little bootstrap can do much much more,
but this is certainly a great job for this trusty little tool. Unfortunately, it’s Linux-only. It’s even been made to work with Fedora and
will likely be included in RHEL. Since
we cannot use GPL code in IBM products, we have a similar bootstrap for both
Windows and Linux guests. We’re working
with our lawyers to get approval to contribute this code to cloud-init. Of course,
if Canonical wants to use a more commercial friendly license like
OpenStack has done, then I could spend less time with lawyers and more time
hacking code ;-).
The beauty of this little bootstrap is its simplicity. This simplicity enables us to automatically
inject the bootstrap into Windows and Linux images. This will let us automatically fix up any old VMware,
or Hyper-V image so that it works on OpenStack.
This is a critical first step towards interoperability.
OpenStack is truly becoming an industry changing
and historic project. With so many
incredibly talented people from countless companies across the globe it’s no
wonder there is so much innovation in the community. I’m really happy to be a part of this growing
community. Together I believe we can
change the industry for the better. If
you would like to be part of this growing and innovative project, check out the
“community” link at www.openstack.org.
Also, we would like to invite you to check back here for future blogs on
OpenStack and IBM’s involvement. OpenStack
is a big part of IBM’s open cloud strategy and we want to be sure to keep you
up to date on our progress.
Was out for a few drinks last week with friends from the
local tech community where we try to solve the world’s problems. We were enjoying sitting in the sunshine and
warm weather on the patio of a local brewery discussing the interesting topics
of the week. Before we got into the
presidential election the talk turned to virtualization, of course, and the
current directions their companies are taking.
Everyone has heard about the pricing actions from VMware causing major
concerns from their customers, but these topics to me always seem somewhat
abstract until you hear about people taking action. Well, it turns out the guy who works at a
large tech company in Austin
was actually in the process of installing KVM to replace VMware in his area of
the company due to pricing issues. He
went on to say that a CIO friend of his at another company was in the process
of doing the same thing. That’s the
thing about IT people, once you upset them they can take decisive action to fix
the problem and they tend to have very long memories. I also came across this CNET
article sub-titled “Market research firm IDC says that data from a new survey
shows that "open cloud is key for 72 percent of customers." Clearly there seems to be a pricing level
that makes open cloud solutions look very attractive. As more companies try to balance the need for
virtualization and cloud with the costs of the solutions, I think the open
cloud will become more attractive.
IBM is working with OpenStack which is a global collaboration
of developers and cloud computing technologists producing the ubiquitous open
source cloud computing platform for public and private clouds. The OpenStack Summit, which is sold out, is
going on this week (October 15th) in San Diego; you can read about it and other
topics at the OpenStack blog. There are also some great insights on the
topic at the Linux Foundation blog from IBM’s
Angel Diaz: 3 Projects creating user-driven standards for the open cloud . I think this is definitely a space worth
watching…stay tuned for more updates from the patio…
It’s been estimated that the number of virtual machines in data centers has increased at least tenfold in the last decade. More than fifty percent of virtualized environments now have more than one brand of hypervisor. The hypervisor promise of cutting infrastructure expense has given way to increases in licensing costs of more than three hundred percent. And the average number of images destroyed? Nobody knows.
In short, the challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity.
A critical piece to solving these challenges, as many organizations have already discovered, is image management. While there are many ad hoc and isolated solutions, there continues to be a real need for comprehensive image lifecycle management to combat image sprawl, get more visibility and analysis into where images are stored and how they are being used, and to ensure security through timely patching of images. This doesn’t necessarily mean jumping to cloud solutions, especially for businesses that aren’t ready to adopt cloud orchestration
yet, but rather, implementing image capabilities in the virtualized environment that are robust enough to help with high-value applications and the on-ramp to advanced cloud capabilities.
Simplifying image creation and deployment
Because images are easy to create and copy, it’s often difficult to decipher which images are crucial, where there is redundancy and where there may be a need for more governance. It is also an ongoing challenge to understand what an image consists of without launching it. This image complexity has resulted in IT spending a significant portion of their time on mundane or repetitive tasks such as manually building images and maintaining an image library.
Inserting automation best practices into the process of creating, deploying and managing images can result in immediate time and labor savings, with as much as 40-80% labor cost reduction by increasing image/admin ratio efficiency. Automation also helps to optimize the efficiency and accuracy of service delivery in the data center.
Once images are captured they can be deployed as often as needed. Paired with robust, automated, high-scale provisioning, hundreds of new virtual machines can be deployed in minutes, increasing IT efficiency. They can also be customized based on user needs.
Improving image analysis
Key to effective image analysis (including image search, drift, version control and image vulnerability) is the use of a federated image library, which pulls together the storage and meta information of images across multiple image repositories and hypervisors.
Image search:
With a large amount of image information to contain and understand, it can become difficult to determine the connection between images or their origin. A family-tree hierarchy and grouping of images with version chains simplifies image search by showing how images are linked, when they are in use and where they originated, even in a mixed hypervisor environment. Additionally, searching capabilities within images drastically reduce the complexity of finding the right image and associated information about it.
Image drift:
Varying image iterations make it difficult to manage compliance and version control. Frequently, administrators are forced to maintain volumes of duplicate and unnecessary images because it is difficult to ascertain the need, use or ownership of images. Advanced image management can increase visibility into what is inside a virtual machine through a centralized image library, to determine opportunities to consolidate images, or determine if there are security threats from vulnerable images.
Increasing security with image patching
With the explosion of images to govern, there is a need to be able to detect vulnerability exposures in images to ensure that no virtual machines are created without the proper level of security patches. All systems, both physical and virtual, need to be patched whether they are distributed or part of the cloud. A simplified, automated patching process can administer virtual images from a single console so you have the scalability to patch as quickly as you can provision, allowing users to maintain golden and copied images in a patched state. With this patching capability, policy enforcement can be accomplished and proven in minutes instead of days, and IT can increase the accuracy and speed of patching enforcement, achieving as much as a 98% first-pass patch success rate in hours.
The benefits of a comprehensive, integrated image management solution are immediately obvious. Best of all, there is a high degree of reward with very little risk.
And with image sprawl under control, organizations can expand capabilities for richer end-to-end service management across the virtualized infrastructure such as performance management and data protection as well as look to higher value cloud capabilities for faster service delivery.
Join us on
the Tivoli User Community webcast and opportunity for questions,
Tuesday, September 25th at 11:00 AM, ET, USA
Reserve Your Webcast Seat Now
While virtualization can produce significant cost savings as a result of
reducing infrastructure overhead, it does not address the single-largest cost
element for most data centers—the labor to manage this environment—which can be
as high as 40 percent of the overall cost. If not controlled, management costs
can negate the cost savings realized through virtualization. Virtualization
enables mobility of systems and flexible deployment and re-deployment of
systems. Manually tracking software stacks and configurations of VMs and images
becomes increasingly difficult, and there is a need for provisioning automation. In this
webcast, learn how to accelerate application deployment and increase business
agility by leveraging virtualization and building a simple, scalable cloud.
Speaker: Matt Rodkey, Product Manager, Cloud Mgmt.
Rodkey is a Product Manager focusing on Tivoli Cloud Solutions. In 13 years with IBM, Matt has worked in a
number of areas in the Tivoli portfolio including Security, Monitoring, and
Service Delivery. Click Here to view his TUC Profile
Official Tivoli User Community is the largest online and offline organization
of Tivoli professionals in the world – home to over 160 local User Communities
and dozens of virtual/global groups from 29 countries – with more than 26,000
members. The TUC community offers Users
blogs and forums for discussion and collaboration, access to the latest
whitepapers, webinars, presentations and research for Users, by Users and the
latest information on Tivoli products.
The Tivoli User Community offers the opportunity to learn and
collaborate on the latest topics and issues that matter most. Membership is complimentary. Join NOW!
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
The need to orchestrate becomes clear when the various aspects of cloud management are brought together. At this stage of cloud, the value to the organization lies in simplifying the management of automation -- otherwise a balancing act across multiple hypervisors, resource usage, availability, scalability, performance and more -- based on the business needs the cloud serves, with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it is difficult to realize the full benefits of cloud computing. Stitching together best practices with automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
In addition to rapid service delivery, orchestration can yield significant cost savings in labor and resources by eliminating manual intervention and the hands-on management of varied IT resources and services.
Some key traits of cloud orchestration include:
• Integration of cloud capabilities across heterogeneous environments and infrastructures to simplify, automate and optimize service deployment
• Self-service portal for selection of cloud services, including storage and networking, from a predefined menu of offerings
• Reduced need for manual intervention, allowing a lower ratio of administrators to physical and virtual servers
• Automated high-scale provisioning and de-provisioning of resources with policy-based tools to manage virtual machine sprawl by reclaiming resources automatically
• Ability to integrate workflows and approval chains across technology silos to improve collaboration and reduce delays
• Real-time monitoring of physical and virtual cloud resources, as well as usage and accounting chargeback capabilities to track and optimize system usage
• Prepackaged automation templates and workflows for most common resource types to ease adoption of best practices and minimize transition time
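To make the policy-based reclamation trait above concrete, here is a minimal sketch of an idle-time policy for controlling virtual machine sprawl. The data model, function names and threshold are hypothetical illustrations, not part of any IBM product API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class VirtualMachine:
    name: str
    owner: str
    last_active: datetime  # last time the VM did real work

def reclaim_idle_vms(vms, idle_threshold=timedelta(days=30), now=None):
    """Split VMs into (keep, reclaim) lists using a simple idle-time policy.

    Any VM idle longer than `idle_threshold` is flagged for reclamation;
    a real orchestrator would then de-provision it and return its resources
    to the pool automatically.
    """
    now = now or datetime.utcnow()
    keep, reclaim = [], []
    for vm in vms:
        (reclaim if now - vm.last_active > idle_threshold else keep).append(vm)
    return keep, reclaim
```

In practice the policy would be richer (owner approval chains, grace periods, chargeback notifications), but the core idea is the same: a scheduled policy sweep replaces an administrator manually hunting for forgotten machines.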
In short, many of the capabilities that we associate with cloud computing are really elements of orchestration. In an orchestrated environment, organizations gain tools to manage their cloud workloads through a single interface, providing greater efficiency, control and scalability. As cloud environments become more complex and organizations seek greater benefit from their computing resources, the need for sophisticated management solutions that can orchestrate across the entire environment will become ever clearer.
Learn more about how cloud orchestration capabilities can help your business. And join the Cloud Provisioning and Orchestration development community to test out the latest cloud solutions and provide feedback to impact development.
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSPs) or cloud service providers (CSPs) and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce the labor and error involved in manual tasks, all with an eye to driving revenue and acquiring new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify the complex and time-consuming processes of creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
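The fourth point, near-instant deployment of hundreds of virtual machines, rests on provisioning requests being issued in parallel rather than one at a time. A minimal sketch of that pattern, with a stubbed-out, hypothetical `provision_vm` standing in for a real provider API call:

```python
from concurrent.futures import ThreadPoolExecutor

def provision_vm(template, index):
    # Placeholder: a real implementation would call the cloud
    # provider's provisioning API here and wait for the instance.
    return {"name": f"{template}-{index:03d}", "status": "running"}

def provision_batch(template, count, workers=32):
    """Provision `count` VMs from a single template concurrently.

    With 32 workers, 100 requests complete in roughly the time of
    the 4 slowest sequential batches instead of 100 serial calls.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda i: provision_vm(template, i), range(count)))
```

The worker count and naming scheme are illustrative only; the point is that batch provisioning turns deployment time from a function of fleet size into a function of pool size.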
While the full spectrum of functionality, from virtualization to orchestration, helps service providers manage their environments, high-scale provisioning in particular offers a cost-effective way to treat capacity as a business commodity: a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative cost of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand, but it remained conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and scaled up rapidly without interrupting customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers, such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate its existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to its own customers in a pay-as-you-go model, with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
Learn more about provisioning and orchestration capabilities
that are helping service providers to meet their growing business needs.