Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions of when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. As organizations move beyond managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning is, in most cases, handled by various solutions that have been added over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems is a challenge that automation alone cannot address. As the saying goes, “If you automate a mess, you get an automated mess.”
The need to orchestrate becomes clear when the various aspects of cloud management are brought together. The value to the organization at this stage of cloud is simplifying the management of automation, which is otherwise a balancing act across multiple hypervisors, resource usage, availability, scalability, performance and more, so that the cloud serves business needs with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery, presented in a user-friendly catalog of services that is easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it’s difficult to realize the full benefits of cloud computing. Stitching together best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
In addition to rapid service delivery, orchestration can bring significant cost savings in labor and resources by eliminating manual intervention in the management of varied IT resources and services.
Some key traits of cloud orchestration include:
• Integration of cloud capabilities across heterogeneous environments and infrastructures to simplify, automate and optimize service deployment
• Self-service portal for selection of cloud services, including storage and networking, from a predefined menu of offerings
• Reduced need for intervention to allow lower ratio of administrators to physical and virtual servers
• Automated high-scale provisioning and de-provisioning of resources, with policy-based tools to manage virtual machine sprawl by reclaiming resources automatically (see the sketch after this list)
• Ability to integrate workflows and approval chains across technology silos to improve collaboration and reduce delays
• Real-time monitoring of physical and virtual cloud resources, as well as usage and accounting chargeback capabilities to track and optimize system usage
• Prepackaged automation templates and workflows for most common resource types to ease adoption of best practices and minimize transition time
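To make the policy-based reclamation item above a little more concrete, here is a minimal sketch of how such a policy might be evaluated. It is purely illustrative: the class, thresholds and field names are hypothetical and are not part of any IBM SmartCloud API.

```python
# Minimal sketch of policy-based reclamation of idle virtual machines.
# Purely illustrative: the class, thresholds, and field names are hypothetical
# and are not part of any IBM SmartCloud API.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Vm:
    name: str
    owner: str
    last_activity: datetime
    cpu_avg_pct: float   # average CPU utilization over the reporting window

IDLE_DAYS = 14            # policy: flag VMs idle for two weeks
CPU_IDLE_THRESHOLD = 2.0  # policy: treat < 2% average CPU as idle

def reclaim_candidates(vms, now=None):
    """Return the VMs that the reclamation policy flags for review or deprovisioning."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=IDLE_DAYS)
    return [vm for vm in vms
            if vm.last_activity < cutoff and vm.cpu_avg_pct < CPU_IDLE_THRESHOLD]

if __name__ == "__main__":
    inventory = [
        Vm("build-01", "dev-team", datetime(2012, 9, 1), 0.5),
        Vm("web-01", "ops-team", datetime.utcnow(), 35.0),
    ]
    for vm in reclaim_candidates(inventory):
        print("Candidate for reclamation:", vm.name, "owner:", vm.owner)
```

In a real orchestrator, a policy like this would typically feed an approval workflow rather than deprovisioning machines outright.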
In short, many of the capabilities that we associate with cloud computing are really elements of orchestration. In an orchestrated environment, organizations gain tools to manage their cloud workloads through a single interface, providing greater efficiency, control and scalability. As cloud environments become more complex and organizations seek greater benefit from their computing resources, the need for sophisticated management solutions that can orchestrate across the entire environment will become ever clearer.
Learn more about how cloud orchestration capabilities can help your business. And join the Cloud Provisioning and Orchestration development community to test out the latest cloud solutions and provide feedback to impact development.
As part of the transparent development initiative, IBM SmartCloud Provisioning (formerly known as IBM Service Agility Accelerator for Cloud) is launching a series of daily demos, starting November 7th. Each session will take about one hour.
In this way you can get a near real-time look at what is happening in IBM SmartCloud Provisioning development and learn about new and enhanced capabilities.
If you are interested in joining the sessions, here is the schedule in Central European Time (CET):
- Monday at 4:00 PM
- Tuesday at 11:00 AM
- Wednesday at 4:00 PM
- Thursday at 5:00 PM
- Friday at 11:00 AM
The sessions will be focused on image management.
If you would like to join, using your web browser, connect to
No password is required.
I wanted to let everyone know that a Trial Virtual Machine is available for the SmartCloud Monitoring version 7.2 FP1 product. It provides a 90-day trial of the software to monitor your virtualized environment and includes the capacity planning tools for VMware and PowerVM. These tools can help you optimize your virtualized environment and save money.
Within a few hours you can have the virtual machine up and running and monitoring your virtualized environment.
This is a great tool if you are working with a customer on a proof of concept. Or, if you are a customer, it is a really quick and easy way to evaluate the software.
The Trial includes the SmartCloud Monitoring product plus a little extra content. It includes monitoring for:
- PowerVM (including OS, VIOS, CEC, and HMC)
- Citrix XenApp, XenDesktop, XenServer
- Log file monitoring
- Agent-based and agentless operating system monitoring
- Integration with Tivoli Storage Productivity Center
- Integration with IBM Systems Director
The trial also includes Predictive Analytics, Capacity Planning and Optimization for VMware and PowerVM.
You can find the software at the following URL: https://www.ibm.com/services/forms/preLogin.do?source=swg-ibmscmpcvi2
If you have any questions or need assistance, you can send me an email at firstname.lastname@example.org
We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.
The IT organization must have the capabilities both to deliver services more quickly to meet the demands of the business and to provide high levels of security and compliance. In the past, service delivery itself was typically the bottleneck, but now, with automated cloud and self-service delivery models, the teams responsible for change management and security can quickly become the bottleneck due to manual processes and siloed tools.
For example, organizations need the ability to patch all of their systems, both physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications and related security configurations. And there can be too many security exposures from offline and suspended VMs that haven’t been patched in weeks or months.
A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.
Cloud provisioning should combine application and image provisioning for workload optimized clouds and deliver:
• Reduced costs through automated high-scale provisioning, multiple hypervisor options and the hardware of your choice
• Accelerated time to market with standardized, pattern-based deployment for workload optimized clouds
• Image sprawl prevention with built-in, advanced image lifecycle management capabilities
• Ease of adoption and a clear roadmap to move to advanced cloud capabilities
Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:
• Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms
• Automatic assessment and “single click” remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly (a simple sketch of the assessment idea follows this list)
• Enterprise-class scalability and security, including fine-grained authorization and access control capabilities
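As a rough illustration of the “automatic assessment” idea above, here is a minimal, generic sketch that compares each endpoint’s installed patches against a required baseline. It is not the SmartCloud Patch Management API; the patch identifiers and endpoint names are made up.

```python
# Generic sketch of patch-compliance assessment: compare each endpoint's installed
# patches against a required baseline and report what is missing.
# Illustrative only; this is not the SmartCloud Patch Management API, and the
# patch identifiers and endpoint names below are invented.

REQUIRED_BASELINE = {"KB2847927", "KB2850851", "openssl-1.0.0-27"}

endpoints = {
    "web-01":   {"KB2847927", "openssl-1.0.0-27"},
    "db-01":    {"KB2847927", "KB2850851", "openssl-1.0.0-27"},
    "vm-img-7": set(),  # e.g. an offline image that has never been patched
}

def assess(installed_by_endpoint, baseline):
    """Return a map of endpoint -> set of missing patches (empty set = compliant)."""
    return {name: baseline - installed
            for name, installed in installed_by_endpoint.items()}

if __name__ == "__main__":
    for name, missing in sorted(assess(endpoints, REQUIRED_BASELINE).items()):
        status = "compliant" if not missing else "missing " + ", ".join(sorted(missing))
        print(name + ":", status)
```

In practice the “single click” remediation step would then push exactly those missing patches to each non-compliant endpoint.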
Explore these capabilities with the new IBM SmartCloud Patch Management.
With December's release of IBM SmartCloud Monitoring, Tivoli's venerable IBM Tivoli Monitoring product family, proven in data centers at the world's largest corporations, begins to adopt a "Cloud" posture. Sure, "Cloud" is a term bereft of a clear operational definition that we can apply at any given moment, and customers, analysts and vendors tend to bandy it about pretty freely these days. However, if we don't get too hung up on what Cloud is or isn't, we can probably agree that it represents a migration from our traditional server-delivered infrastructure to one composed of pooled computing resources shared by virtual workloads. Whether or not our customers are calling their virtualized environments "private clouds" today, and whether or not they've got a "cloud budget" that they're using for such initiatives, the fact that they're moving along the cloud maturity continuum at some pace seems inescapable, given IDC's assertion that we crossed the magical "50%" boundary last year, when half of all corporate workloads were running on virtual machines instead of physical ones.
If we're beginning to think in terms of clouds of pooled computing resources, it makes sense that we begin to deliver management solutions in the same way, right? If the server administrators, storage administrators and network administrators now report to a cloud administrator, we should begin to package solutions for those cloud administrators, combining multiple pieces of management technology into a single part number that customers can purchase and deploy. That's exactly what we've done with SmartCloud Monitoring. The discrete monitoring agents at the heart of IBM Tivoli Monitoring (OS monitors, application monitors, storage monitors and so on) are as important as they ever were. Even though we're pooling those resources across virtual machines, we still have to monitor things like processes, CPU activity, IO throughput, and so on. We just need to add a layer on top of all that granular detail, so the cloud administrator can see, at a glance, what's healthy or unhealthy about the cloud environment, before drilling down into the nuts and bolts.
SmartCloud Monitoring combines the VMware virtualization management features in ITM for Virtual Environments with virtual machine instance monitoring from ITM's operating system agents, to monitor a cloud infrastructure and the workloads running on it.
Our roadmap looks like an analyst's cloud maturity ladder, adding features such as automated provisioning, usage and accounting integration, and more detailed network monitoring, so our solution will "mature" along with the market, and customers' needs. See if the challenges along this ladder look like things that you or your customer have faced on their cloud journey, or are grappling with now. It's important to note that Tivoli has solutions that can be applied to each step, and for each problem. What SmartCloud promises is a way to bring those solutions together into more consumable bundles, tightly integrated together, to make cloud management simple to purchase and simple to deploy.
SmartCloud Monitoring delivers key capabilities for optimizing and maintaining a private cloud, including:
- Health dashboards, to provide an instant, consolidated glimpse into cloud health
- Topology views of the key interrelated components of the cloud
- Reports on the health trends of cloud components and workloads, powered by Cognos
- What-if capacity planning scenarios
- Policy-based optimization to put workloads where they’ll perform best, not just where they’ll fit
- Performance Analytics for right-sizing of virtual machines
- Integration with industry-leading Tivoli service management portfolio
Service Health for IBM SmartCloud Provisioning has officially reached general availability and is now available on the IBM Integrated Service Management Library (ISML).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring using a custom agent, OS agents, and the ITM for Virtual Environments (ITMfVE) agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to quickly identify and react to issues in your environment, such as an unresponsive compute node, high disk usage on storage nodes or key kernel services not responding, and to minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library (ISML) by following this link -> Service Health for IBM SmartCloud Provisioning
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSP) or cloud service providers (CSP), and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce human resources and error from manual tasks—all with an eye to drive revenue and acquire new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify complex and time consuming processes for creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the full spectrum of functionality from virtualization to orchestration helps service providers manage their environments, high-scale provisioning in particular offers a cost-effective way to leverage capacity as a business commodity: a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative costs of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand but it was very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interruption to customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
Learn more about provisioning and orchestration capabilities that are helping service providers to meet their growing business needs.
The new capacity planning tool, now available in Beta, will unlock the value of Tivoli Monitoring and Warehouse and enable a rich set of analytics on the existing data. This new capability will enable you to
- Understand and plan your virtual environments proactively, initially for VMware
- Minimize risks in your plan
- Optimize how you use capacity in the environment with intelligent workload sizing and placement
- Apply business and technical policies to keep your environment efficient and risk-free
- Make changes in a what-if analysis framework and view the impact of change.
The tool leverages Tivoli Integrated Portal (TIP) and Tivoli Common Reporting (TCR) with an embedded Cognos reporting engine. It integrates with the ITM and TDW infrastructure to get configuration and usage data from your virtual infrastructure.
Here's a quick overview of the advanced planning scenarios you can now implement in your virtual environment using this tool.
Key Scenarios for a Capacity Analyst
- Planning for capacity growth: Let's suppose your business provides a forecast that will increase the load on the IT infrastructure in the coming months. The capacity analyst can model the increase in resource requirements from the existing VMs in the what-if planning tool, scope the part of the infrastructure to analyze, and automatically generate a plan to fit the increasing demand. If required, new servers can be added to handle the growth.
- Ensure compliance with defined capacity planning policies: The LOB and application owners often provide the capacity analyst with a list of requirements for how their workloads should be placed on the IT infrastructure. These are typically business guidelines to improve efficiency, reduce cost, respect organizational boundaries, or cut risks on a virtual infrastructure. For example, the Finance and Payroll apps may not be allowed to share common hosts, or apps with different downtime requirements may need to be kept apart. There may also be technical policies that guide planning: for example, reducing license cost by putting OS images on fewer hosts, or a DBA wanting to keep some headroom for the database VMs. The tool can help to centralize the creation of such policies and select a subset to guide a what-if planning scenario.
- Avoid bottlenecks in your environment: IT administrators can predict a bottleneck in a VMware cluster that may not be fixed by dynamic allocation within the cluster. These are often long-term issues, as the cluster may be running VMs that are not the right combination to share resources dynamically. The planning capability can be used to recommend how VMs can be moved across clusters, or how clusters can be restructured, to remove bottlenecks and optimize resources in a broader scope.
- Plan for new users in a Cloud environment: Cloud administrators are often challenged with planning for new users on the shared infrastructure and performing what-if analysis. With this tool, they can simulate new VMs on the discovered Cloud, add information regarding users, create policies specific to such users, and create a recommended new environment plan. The policies may represent users that want dedicated hosts for their VMs, images that need specific types of hardware, and so on. The recommended plan can help them understand how and where to add new hardware or consolidate VMs to free up fragmented Cloud resources.
- Plan for retiring or re-purposing hardware: The planning capability enables the user to add new information for the discovered environment. For example, a user can add warranty date information about the discovered hardware, often contained in spreadsheets or other tools, and then select hosts that are more than 5 years old in the planning tool. They can add new hardware from the catalog for a what-if scenario. The tool can then automatically generate an optimized plan on how the workloads from the old hardware will fit on the new hardware and how many new machines of what type are required.
There may be several other scenarios that one can come up with on this tool framework.
The planning tool also provides a workflow-driven UI with both fast-path and expert-mode options. The main workflow page is shown below with a 5-step approach to create optimized virtual environment plans with default options for several steps. One can iterate through these steps to reach the desired results.
- Load the latest configuration data of the virtual environment for analysis
- Set the time period for analyzing historical data
- Define the scope of hosts to analyze in the virtual environment
- Size the virtual machines in scope
- Generate a placement plan for the virtual machines on the physical infrastructure in scope
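To give a feel for what that final placement step produces, here is a conceptual sketch that generates a placement plan using a simple first-fit-decreasing heuristic. This is not the capacity planning tool's actual optimization algorithm, and the host and VM names and sizes are invented purely to show what a placement plan contains.

```python
# Conceptual sketch of generating a placement plan with a first-fit-decreasing
# heuristic: place the largest VMs first onto the first host with enough headroom.
# NOT the capacity planning tool's actual algorithm; names and sizes are invented.

def place_vms(vms, hosts):
    """vms: list of (name, cpu, mem_gb); hosts: dict of name -> [cpu_capacity, mem_capacity].
    Returns (plan, unplaced), where plan maps each VM name to a host name."""
    plan, unplaced = {}, []
    free = {host: list(capacity) for host, capacity in hosts.items()}
    for name, cpu, mem in sorted(vms, key=lambda v: (v[1], v[2]), reverse=True):
        for host, caps in free.items():
            if cpu <= caps[0] and mem <= caps[1]:
                plan[name] = host
                caps[0] -= cpu
                caps[1] -= mem
                break
        else:
            unplaced.append(name)  # would require new hardware
    return plan, unplaced

if __name__ == "__main__":
    hosts = {"esx-1": [16, 64], "esx-2": [16, 64], "esx-3": [16, 64]}
    vms = [("db-01", 8, 32), ("app-01", 6, 16), ("web-01", 4, 8),
           ("web-02", 4, 8), ("batch-01", 2, 4)]
    plan, leftover = place_vms(vms, hosts)
    print(plan)      # here all five VMs fit on two of the three hosts
    print(leftover)  # VMs listed here would need additional capacity
```

The real tool adds the dimensions this sketch ignores: historical usage, business and technical policies, and risk scoring for each placement.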
An example recommendation output of the tool is shown below with interactive topology navigation, summary views, and risk scores assigned to the infrastructure elements. The recommendation is actionable: the structured output is available as XML, so one can write an adapter to trigger automation workflows that implement the recommendations. The example screen shows how we analyzed a cluster with 4 hosts and recommended a consolidation onto 3 hosts.
The topology view is interactive as it allows the user to click on various nodes and visualize the summary of the infrastructure levels below the node. Risk levels of the nodes are shown as node colors.
Hope this will be an exciting set of functions to start with and we look forward to suggestions on feature improvements and scenarios. Please contact Gary Forghetti (email@example.com) to schedule a demo or sign up for the Beta version of the tool. We'll keep updating this forum with more and more details, such as demo videos, white papers etc.
Anindya Neogi, PhD | Senior Technical Staff Member | Tivoli Software | IBM Master Inventor
My three favorite things about OpenStack are
- The People
- The Innovation
- The Interoperability
San Diego was my second OpenStack summit. Many of the same faces were in the design summit sessions I attended, but there were many new faces as well. One of the most exciting observations from the Folsom design summit was the incredible talent pool assembled. The Grizzly summit was no different – it’s great to interact with so many incredibly smart, deep and experienced people. I’m convinced that a single company could never amass such a collection of quality talent for one project. I guess it’s no wonder they’re saying OpenStack is the fastest growing open source project ever.
I must apologize in advance, because I am sure to miss someone, but I want to tell you about some of the people I interacted with in the nova, glance, and cinder design sessions.
Over the past few months I’ve really been impressed with the PTLs. They’re very smart, highly motivated, and excellent facilitators. The design sessions invariably get into open debate, but productive debate. I was impressed with the PTLs’ natural ability to channel the discussion, bring out the key issues and land on some concrete next steps.
I got to meet Microsoft’s Peter Pouliot, whose heroic and tenacious efforts successfully delivered Hyper-V support after a rather dodgy mess earlier in the year. Peter is not your stereotypical Microsoft developer. He’s an open source guy through and through. It’s clear that his personal spirit had a lot to do with corralling the community to deliver quality code in a very short time frame. It was great to meet Peter and some of his non-Microsoft collaborators. Great job, guys!
I also had the pleasure of meeting some of VMware’s developers, and not just those acquired via the billion-dollar Nicira acquisition. The Nicira guys are great, no question, but I was also very pleased to meet the VMware developer who completely rewrote the less-than-adequate VMware compute driver. I hope to work closely with them to ensure the hypervisor is well supported and as interoperable as possible with other proprietary and open source technologies.
Of course, I can’t speak of OpenStackers without mentioning Rackspace. Over the past two summits, I got to interact with a number of Rackspace developers, aka Rackers. I’ve got to hand it to them, they really do have a great bunch of people and definitely bring a massive-scale service provider perspective to the discussion. Of course, being an IBMer myself, I can’t help but bring the enterprise customer perspective into the mix. I think OpenStack benefits greatly from these two perspectives brought together in the open.
OpenStack has done a great job defining an extensible framework for IaaS. This flexibility not only helps accommodate the varied needs from enterprise to service provider, but it also enables a massive sea of innovation. Since the Nicira acquisition there’s been a lot of attention on the innovation around software-defined networking and quantum, the OpenStack project that provides the abstraction for a variety of implementations ranging from proprietary, to pure open source like Open vSwitch, to traditional standard networking equipment. I think storage is even hotter than networking these days, with a slew of vendors combining commodity 10GbE switches and commodity Intel servers with a mix of SSDs and spinning disks to provide new approaches to storage for virtualized environments. Of course, software plays a critical role in many of these virtualized storage solutions. DreamHost’s open source distributed file system Ceph has been getting a lot of interest.
Enterprise storage vendors like NetApp, IBM, and HP have also contributed cinder drivers to support their products within OpenStack clouds. There were also a number of summit discussions about exposing the different backend implementations of the abstractions with different qualities of service. Some people, including one of my developers, have begun to use “Volume Types” as a way to let users choose the kinds of volumes they need. I believe this is critical for compute clouds to cover the broadest spectrum of workloads. Of course, this principle applies to other resources and not just cinder volumes.
I saw a lightning talk about a nova driver for SmartOS, a cool open source project from Joyent combining Solaris zones, ZFS, and KVM. There was ARM and Power CPU support presented, as well as a couple of bare metal solutions. Intel, KVM, and OpenStack certainly make a nice combination, but there’s so much more that’s possible with OpenStack.
Finally, perhaps the most important thing about an OpenStack cloud is interoperability. Starting with the hypervisor, IBM has a solution that enables interoperability of images, volumes, and networks across Xen, KVM, VMware and Hyper-V. We had a few sessions where we discussed how we can bring the same interoperability to OpenStack. To start with, we need to be able to register read-only cinder volumes as glance images. Next, to ensure we can scale out, we need to be able to register multiple copies of the same image. Finally, to take advantage of performance, we need to abstract the clone operation to enable copy-on-write (CoW) and copy-on-read (CoR), as well as the current local-cache-plus-CoW mechanism for backwards compatibility and to support 1GbE networks. Combining these will enable images to work across multiple different hypervisors.
We also need interoperability with existing images, which means VMware and Amazon, the two most common forms of images. Today, it’s quite easy to automate simple image formatting differences, but the challenge is in the assumptions made by the images. The current direction for OpenStack is to use config drive v2 to pass instance metadata to the guest, which is responsible for pulling key system configuration such as hostnames, credentials, and IP addresses. Typical VMware images, on the other hand, generally expect either a push model, where the hypervisor manipulates the filesystem prior to booting the image, or configuration via their guest agent, VMware Tools.
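To make the “pull” model concrete, here is a minimal sketch of what a guest-side agent might do with a mounted config drive v2: read openstack/latest/meta_data.json and report the configuration it would apply. The mount point is an assumption on my part, and real agents such as cloud-init handle far more (networking, credentials, user data, and so on).

```python
# Minimal sketch of the config drive v2 "pull" model: the guest mounts the drive
# (labelled config-2) and reads its own configuration from the JSON metadata.
# The mount point is an assumption; real agents like cloud-init do much more.
import json
import os

CONFIG_DRIVE_MOUNT = "/mnt/config"  # assumes the config-2 volume is mounted here
META_DATA_PATH = os.path.join(CONFIG_DRIVE_MOUNT, "openstack", "latest", "meta_data.json")

def read_metadata(path=META_DATA_PATH):
    """Load the instance metadata the cloud passed to this guest."""
    with open(path) as f:
        return json.load(f)

if __name__ == "__main__":
    meta = read_metadata()
    # Typical keys include "uuid", "hostname", and "public_keys".
    print("instance uuid:", meta.get("uuid"))
    print("hostname to apply:", meta.get("hostname"))
```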
To make matters worse, OpenStack currently assumes different image formats for each supported hypervisor. One of the sad punchlines from Troy Toman’s keynote was that Rackspace’s private cloud distro, named Alamo, does not interoperate with their public cloud even though they’re both OpenStack. The good news is that, as Troy went on to say, the time has come to focus on interoperability.
I got into a great conversation with Jesse Andrews, one of the original OpenStack guys, now at Nebula. He described an approach to image interoperability that lets cloud operators provide custom image workers at image ingestion time. This enables cloud providers to register custom image processing code that gets called whenever an image is uploaded to Glance. The simplest case of this is to convert image formats to enable Alamo KVM images to run on Rackspace’s Xen-based public cloud.
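As a rough sketch of that simplest case, an image worker might convert the format at ingestion with qemu-img (shown here converting a VMware VMDK to qcow2 for KVM) and re-register the result with Glance. This is illustrative only, not an actual Nebula, Rackspace or IBM implementation; it assumes qemu-img and the Glance command-line client are on the path, and the paths and image names are invented.

```python
# Rough sketch of an image-conversion "worker": convert an uploaded VMware image
# to qcow2 with qemu-img and re-register the result with Glance.
# Illustrative only; paths and image names are invented.
import subprocess

def convert_and_register(src_path, image_name):
    dst_path = src_path.rsplit(".", 1)[0] + ".qcow2"
    # qemu-img performs the actual format conversion (here VMDK -> qcow2 for KVM).
    subprocess.check_call(
        ["qemu-img", "convert", "-f", "vmdk", "-O", "qcow2", src_path, dst_path])
    # Re-register the converted image; shown via the glance CLI for brevity.
    subprocess.check_call(
        ["glance", "image-create", "--name", image_name,
         "--disk-format", "qcow2", "--container-format", "bare",
         "--file", dst_path])
    return dst_path

if __name__ == "__main__":
    convert_and_register("/var/lib/images/legacy-app.vmdk", "legacy-app-kvm")
```

Format conversion is the easy part; as the next paragraphs note, the harder problem is the assumptions baked into the image itself.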
Fortunately, IBM’s SmartCloud Provisioning (SCP) includes some image management technologies which can help with the more challenging problems mentioned above. Today’s SCP 2.1 will interrogate images in the library and check for cross-hypervisor compatibility. Users gain visibility into this information and can optionally automate fixes wherever possible. We also use this technique to detect the presence of a critical guest agent.
This brings me to one of my favorite little open source projects, cloud-init, created by Scott Moser at Canonical. If only it weren’t GPL ;-). Many OpenStackers are using cloud-init to automate the system configuration pull from config drive v2 mentioned above. This little bootstrap can do much, much more, but this is certainly a great job for this trusty little tool. Unfortunately, it’s only for Linux. It’s even been made to work with Fedora and will likely be included in RHEL. Since we cannot use GPL code in IBM products, we have a similar bootstrap for both Windows and Linux guests. We’re working with our lawyers to get approval to contribute this code to cloud-init. Of course, if Canonical wants to use a more commercially friendly license, as OpenStack has done, then I could spend less time with lawyers and more time hacking code ;-).
The beauty of this little bootstrap is its simplicity. That simplicity enables us to automatically inject the bootstrap into Windows and Linux images, which will let us automatically fix up any old VMware or Hyper-V image so that it works on OpenStack. This is a critical first step towards interoperability.
OpenStack is truly becoming an industry-changing and historic project. With so many incredibly talented people from countless companies across the globe, it’s no wonder there is so much innovation in the community. I’m really happy to be a part of this growing community. Together, I believe we can change the industry for the better. If you would like to be part of this growing and innovative project, check out the “community” link at www.openstack.org. Also, we would like to invite you to check back here for future blogs on OpenStack and IBM’s involvement. OpenStack is a big part of IBM’s open cloud strategy and we want to be sure to keep you up to date on our progress.
The IBM® Image Construction and Composition Tool is a web application that simplifies and automates virtual image creation for public and private cloud environments, shielding the differences in cloud implementations from its users.
This white paper provides Software Specialists and other product experts with helpful tips and techniques to plan, design, and create software bundles in the Image Construction and Composition Tool.
Is Cloud the Next Utility? Hmmm, there's an interesting thought. I know many of us are still trying to define "what is Cloud" and "what does Cloud mean for my business." I spend my days in IBM's Cloud & Smarter Infrastructure team working on bringing "Cloud Solutions" to our customers. This turns out to be simply helping customers use what they have, maybe with a little twist, to build out their data center in a more robust, efficient manner. It's all about helping them meet the needs of their business. IT teams have been doing derivations of "do more with less" for years, and as technology matures, specifically with virtualization and the management of virtual environments, IT teams are able to improve their Quality of Service and reduce their Total Cost of Ownership while increasing the services they offer their consumers. How, you ask? IT managers everywhere are trying to figure out how to provide their services in a utility-like format utilizing "the Cloud." Think about how natural it is for us to expect that flipping on a light switch magically makes electricity flow and the light come on; how about if we could select a service and, magically, systems are provisioned, business processes are established and the service is available - would that provide business value? I'll use this blog post to start you thinking about "Cloud as the Next Utility" and get you wondering if we do indeed have front row seats to a Cloud Computing Revolution.
Should I get my electricity from the local power company, or should I invest in sustaining my own private electricity supply? As individuals focus on a greener way of life, they may be asking themselves that question as they focus on using what they have while minimizing their own carbon footprint. People have options they can ponder to figure out what is best for their families, their lives and their utility usage.
Organizations are iterating over multiple decision points to provide the growing IT infrastructure required to run their business, and those decisions are continually influenced by many different things. Do they already have a large IT investment? Do their IT requirements expand and contract depending on current activities? Do they have a diverse investment and skills in existing service management tools? It's apparent that these same questions can be asked regardless of the size of your IT shop - big and small companies alike have come to depend on IT services, and this dependency continues to grow in this computing revolution. One direction an IT manager might take is a hybrid approach. Such a strategy allows them to utilize their current investment in infrastructure, tools and skills to build out their own Private Cloud while keeping a Public Cloud vendor accessible when their computing needs spike. Companies have options they can ponder to figure out what is best for their company's business, their IT environments and their utility usage.
How is it that when I buy a new appliance, I don't even think about whether it will work with my electrical outlets when I get home? Standardization of how electricity is delivered and consumed is ubiquitous and expected worldwide. As consumers, we buy a utility and have an expectation that it will work with all our appliances to meet our needs.
Most companies have already made large investments in both infrastructure and tools, as well as the skills to maintain them. The need to deliver more with what is available requires technology that can bridge from existing environments to new environments, regardless of the influences. By adopting and adhering to computing standards, tooling can utilize data from existing systems while meeting the needs and different delivery methodologies of the business in new ways. This means I can ease the coordination of complex tasks and workflows in my data center while leveraging existing skills, processes and technology artifacts to deliver services in new and different ways. By employing computing standards in my data center strategy, I can be assured that I can rely on Public Cloud for my processing spikes, and I can utilize my older infrastructure while integrating new infrastructure, all with the same extensible tooling for deployment and management of compute, storage and networking. As you define the next milestones in your cloud strategy and roadmap, I would encourage you to investigate what is in use today in your data center and how you need to evolve or reuse that investment. IBM offers SmartCloud Orchestrator to manage your existing datacenter(s). It provides capabilities based on industry standards and can extend your current processes to Cloud, helping you bring Cloud services to your users in the same self-service manner as flipping on a light switch or plugging in a new appliance. A utility that will work to meet your users' needs.
Until next time, keep your head in the Clouds.
It’s been estimated that the number of virtual machines in data centers has increased at least tenfold in the last decade. More than fifty percent of virtualized environments now have more than one brand of hypervisor. The hypervisor promise of cutting infrastructure expense has given way to increases in licensing costs of more than three hundred percent. And the average number of images destroyed? Nobody knows.
In short, the challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity.
A critical piece to solving these challenges, as many organizations have already discovered, is image management. While there are many ad hoc and isolated solutions, there continues to be a real need for comprehensive image lifecycle management to combat image sprawl, get more visibility into and analysis of where images are stored and how they are being used, and ensure security through timely patching of images. This doesn’t necessarily mean jumping to cloud solutions, especially for businesses that aren’t ready to adopt cloud orchestration yet, but rather implementing image capabilities in the virtualized environment that are robust enough to help with high-value applications and the on-ramp to advanced cloud capabilities.
Simplifying image creation and deployment
Because images are easy to create and copy, it’s often difficult to decipher which images are crucial, where there is redundancy and where there may be a need for more governance. It is also an ongoing challenge to understand what an image consists of without launching it. This image complexity has resulted in IT spending a significant portion of their time on mundane or repetitive tasks such as manually building images and maintaining an image library.
Inserting automation best practices into the process of creating, deploying and managing images can result in immediate time and labor savings, with as much as 40-80% labor cost reduction by increasing image/admin ratio efficiency. Automation also helps to optimize the efficiency and accuracy of service delivery in the data center.
Once images are captured they can be deployed as often as needed. Paired with robust, automated, high-scale provisioning, hundreds of new virtual machines can be deployed in minutes, increasing IT efficiency. They can also be customized based on user needs.
Improving image analysis
Key to effective image analysis (including image search, drift, version control and image vulnerability) is the use of a federated image library, which pulls together the storage and meta information of images across multiple image repositories and hypervisors.
Image search: With a large amount of image information to contain and understand, it can become difficult to determine the connection between images or their origin. A family-tree hierarchy and grouping of images with version chains simplifies image search by showing how images are linked, when they are in use and where they originated, even in a mixed hypervisor environment. Additionally, search capabilities within images drastically reduce the complexity of finding the right image and associated information about it.
Image drift: Varying image iterations make it difficult to manage compliance and version control. Frequently, administrators are forced to maintain volumes of duplicate and unnecessary images because it is difficult to ascertain the need, use or ownership of images. Advanced image management can increase visibility into what is inside a virtual machine through a centralized image library, to determine opportunities to consolidate images or to determine whether there are security threats from vulnerable images.
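A simplified way to picture drift detection is to compare the software manifests of two images and report what differs. The sketch below assumes manifests are simple package-to-version maps; a real image library's introspection of files and installed products is far richer.

```python
# Simplified sketch of image drift detection: compare two images' software
# manifests (package -> version) and report additions, removals, and changes.
# Illustrative only; real image libraries introspect images much more deeply.

def drift(manifest_a, manifest_b):
    added = {p: v for p, v in manifest_b.items() if p not in manifest_a}
    removed = {p: v for p, v in manifest_a.items() if p not in manifest_b}
    changed = {p: (manifest_a[p], manifest_b[p])
               for p in manifest_a.keys() & manifest_b.keys()
               if manifest_a[p] != manifest_b[p]}
    return added, removed, changed

if __name__ == "__main__":
    golden = {"httpd": "2.2.15", "openssl": "1.0.0-27", "bash": "4.1.2"}
    clone  = {"httpd": "2.2.15", "openssl": "1.0.0-25", "mysql": "5.1.69", "bash": "4.1.2"}
    added, removed, changed = drift(golden, clone)
    print("added:", added)      # packages only in the clone
    print("removed:", removed)  # packages missing from the clone
    print("changed:", changed)  # version drift, e.g. an unpatched openssl
```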
Increasing security with image patching
With the explosion of images to govern, there is a need to detect vulnerability exposures in images to ensure that no virtual machines are created without the proper level of security patches. All systems, both physical and virtual, need to be patched, whether they are distributed or part of the cloud. A simplified, automated patching process can administer virtual images from a single console, so you have the scalability to patch as quickly as you can provision, allowing users to maintain golden and copied images in a patched state. With this patching capability, policy enforcement can be accomplished and proven in minutes instead of days, and IT can increase the accuracy and speed of patching enforcement, achieving as much as a 98% first-pass patch success rate in hours.
The benefits of a comprehensive, integrated image management solution are immediately obvious. Best of all, there is a high degree of reward with very little risk.
And with image sprawl under control, organizations can expand capabilities for richer end-to-end service management across the virtualized infrastructure such as performance management and data protection as well as look to higher value cloud capabilities for faster service delivery.
Even if you weren’t at IBM Pulse, trending right now on the web is chatter about IBM’s announcement that it will leverage open technologies pervasively in the development of its cloud offerings.
With SmartCloud Orchestrator, an integrated platform to standardize and manage heterogeneous hybrid environments, IBM is launching its first commercial offering based on OpenStack. And with SmartCloud Orchestrator, IBM is also redefining the scope of orchestration to encompass the streamlining and integration of all resources, workloads and services.
The need for this kind of capability is addressed in the latest IDC report, which discusses why it will become a priority as organizations look to improve operational efficiency and reduce the mess and complexity of growing cloud environments.
Standardizing and automating cloud services includes integrating performance and capacity management, usage and accounting, and rich image lifecycle management. In addition, services and tasks such as compute and storage provisioning, configuration of network devices, and integration with service request and change management systems and processes can all be streamlined. Out-of-the-box robust workload patterns also enable fast development of cloud services.
With SmartCloud Orchestrator, it’s all brought together to seamlessly manage heterogeneous environments, allowing organizations to build on existing investments and open source technologies.
If you haven’t had time to catch up on what’s trending, here’s the short version on how IBM is helping to advance the cloud to drive innovation.
TUC Webcast: Maximize the Benefits of Virtualization for Greater ROI
Please join the Tivoli User Community for a live webcast and Q&A opportunity on Thursday, February 21, 2013 at 11:00 AM ET (USA).
Click here to reserve your webcast seat now
About this Webcast:
Challenges with virtualized environments are driving the shift to increased integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. In this webcast, learn how organizations can realize the full benefits of virtualization, including reduced management costs, decreased deployment time, increased visibility into performance and maximized ROI.
About the Speaker:
Matthew Rodkey, Product Manager focusing on Tivoli Cloud Solutions
Matt Rodkey is a Product Manager focusing on Tivoli Cloud Solutions. In 13 years with IBM, Matt has worked in a number of areas in the Tivoli portfolio, including Security, Monitoring, and Service Delivery.
The Official Tivoli User Community is the largest online and offline organization of Tivoli professionals in the world, home to over 160 local User Communities and dozens of virtual/global groups from 29 countries, with more than 26,000 members. The TUC community offers Users blogs and forums for discussion and collaboration; access to the latest whitepapers, webinars, presentations and research for Users, by Users; and the latest information on Tivoli products. The Tivoli User Community offers the opportunity to learn and collaborate on the latest topics and issues that matter most. Membership is complimentary. Join NOW!
Starting December 9, 2011, IBM SmartCloud Provisioning 1.2 is available for download.
The key features introduced in this release are:
Full product install through an interactive tool:
IBM SmartCloud Provisioning can now be installed using a graphical wizard. There are two flavours of the installer: minimal and custom. The custom installation allows you to specify the number of instances of HBase and ZooKeeper to be deployed. Moreover, it allows you to automatically configure ESXi servers as compute nodes. The creation of the management virtual image on VMware is automated.
Support for multiple networks:
You can now deploy images with more than one NIC. Different users can deploy images in segregated networks.
Integration of the Image Construction and Composition Tool:
The Image Construction and Composition Tool helps you build and customize master images. It is designed to facilitate a separation of concerns and tasks, where experts build software bundles for reuse by others. This design approach greatly simplifies virtual image creation and reduces errors.
Support for the Open Virtualization Format (OVF):
OVF images can be created or modified by the Image Construction and Composition Tool. OVF metadata can be displayed and modified in the Self Service UI.
Integration of the Virtual Image Library component:
The Virtual Image Library helps manage the life cycle of virtual images:
- Search images for specific software products
- Compare two images and determine the differences in files and products
- Find similar images
- Track image versions and provenance
The cloud administrator can use a brand new UI to perform tasks like registering images, registering networks, managing quotas, assigning roles, and managing elastic IPs.