KVM is well known as a hypervisor for Linux, and its integration with the Linux kernel and inclusion in the Linux build tree makes KVM a natural choice for Linux.
But why would anyone use KVM as a hypervisor for Windows? Isn't that counterintuitive, and wouldn't Hyper-V or VMware be a more natural choice?
Think again. IBM uses KVM as the hypervisor for its "IBM SmartCloud Enterprise" public cloud - for both Linux and Windows instances. And you might want to do the same. To understand why, we need to dig deeper into what a hypervisor needs to do - and how hypervisors relate to operating system kernels.
Fundamentally, a hypervisor needs to create and run virtual machines, allocate and manage memory, protect different virtual machines from trampling over each other, share the processor(s) between different virtual machines, and interface to the hardware devices. Yes, of course there's a lot more complexity in doing this, and especially in doing so efficiently, but the hypervisor is in effect a mini operating system - without the complexity of the graphical user interface, command line utilities and so on.
KVM plugs into an existing operating system - Linux - and turns it into a standalone hypervisor: one that runs on the bare metal and uses the hardware virtualization support included in recent x86 processors.
KVM then uses the existing capabilities of Linux to allocate memory, provide security, and schedule the processor(s) to give the right amount of time to each virtual machine. It doesn't need to reinvent the wheel - Linux already provides these functions, and does so very well.
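As a minimal illustration (not from the original article), you can check from Python whether a Linux host exposes KVM at all, by looking for the /dev/kvm device node that user-space tools open and for the kvm kernel modules:

```python
import os

def kvm_available():
    """Return True if this Linux host appears to expose KVM.

    Checks for the /dev/kvm device node, and for the kvm modules
    (kvm, kvm_intel, kvm_amd) listed in /proc/modules. Returns False
    on non-Linux hosts or hosts without hardware virtualization.
    """
    if os.path.exists("/dev/kvm"):
        return True
    try:
        with open("/proc/modules") as f:
            return any(line.startswith(("kvm ", "kvm_intel ", "kvm_amd "))
                       for line in f)
    except OSError:
        return False

print(kvm_available())
```

If this returns True, the Linux kernel is ready to act as the virtual machine monitor described above.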
And then you can take this combination of KVM and Linux, remove the operating system code you don't need, and you end up with an optimized, efficient, standalone hypervisor. Red Hat has done just this to produce RHEV-H, or Red Hat Enterprise Virtualization - Hypervisor.
Now that you've got your standalone hypervisor, why would you use it for virtualizing Windows? Here are three reasons for starters:
* KVM offers a lightweight, high performance, low cost hypervisor
* KVM can support both Linux and Windows guests, providing a common hypervisor for mixed environments
* KVM can use the advanced security included in Linux through SELinux to provide mandatory access control protection between virtual machines
So next time you think about KVM, don't think about it as just a hypervisor for Linux. Think of KVM as a standalone hypervisor for both Linux and Windows.
Program Director - Cross-IBM Linux and Open Virtualization Strategy
(Originally posted on IBM Smarter Computing tumblr blog)
As we enter an era of Smarter Computing, IT organizations are facing exploding demand. Data is more than doubling every two years, and new services of ever greater quality are requested - all on budgets that on average grow less than one percent per year.
As IT organizations learn how to do more with less, virtualizing servers, storage and networks can help them achieve a simpler, more scalable and cost-efficient IT infrastructure. Proper management of the virtualized infrastructure also improves the speed of deployment of new services. The road to improved business agility has four distinct stages that range from securing IT efficiency in the consolidation stage, to gaining business effectiveness in the optimize stage.
Companies frequently start by virtualizing servers. This can deliver immediate benefits from lower capital expense and reduced energy costs. For example, Edith Cowan University in Australia consolidated a large, distributed, aging infrastructure of systems and storage into an end-to-end, cost-effective solution using virtualization on IBM System x. The university reduced its physical server count from 600 to 100, achieved significant savings in power and cooling, and freed up administrator time for higher-value projects.
Further benefits are available by using IBM Systems Director to manage physical and virtual resources across the entire IBM Systems portfolio (System x, Power, System z, storage, networking) and across multiple virtualization environments (KVM, VMware, etc.). Companies that have implemented Systems Director achieve important savings, such as reducing server management costs by up to 34 percent. And using additional tools from IBM Tivoli, IT administrators can deploy new workloads and services more rapidly across IBM and non-IBM environments.
The virtualization journey offers a solid foundation for cloud computing. Clients like China Telecom Jiangxi (.pdf) rely on IBM’s virtualization solutions and expertise to achieve the flexibility and economic benefits of Smarter Computing. Using IBM Power servers, IBM PowerVM and IBM Systems Director VMControl, China Telecom Jiangxi created cloud landscapes and managed pools of virtual systems. They used the IBM SAN Volume Controller (SVC) to virtualize and manage storage. With this IBM solution, they reduced time to market for new offerings from months to days, improved utilization, cut hardware costs by over 50 percent, and reduced power requirements and CO2 emissions.
IBM also provides clients choice, by supporting open source virtualization technologies such as KVM that are cost effective, and offer enterprise-class performance and scalability. In May, IBM helped found the Open Virtualization Alliance, an industry consortium focused on driving market adoption of KVM and fostering an ecosystem of KVM based solutions. Since then, more than 170 members have joined, many of them virtualization, datacenter and Cloud solution providers. This fast pace of enrollment illustrates the excitement we see in the industry around KVM, and the customer demand for an open alternative in virtualization.
IBM’s virtualization solutions are a critical factor of Smarter Computing and the foundation for cloud computing, helping to improve business agility and staff productivity. IBM consistently demonstrates the economic benefits of virtualization on our range of server and storage platforms, and with that addresses the biggest challenges that CIO’s and IT architects face today.
The IBM Research Compute Cloud (RC2) is a private cloud for internal IBM use that currently hosts over 2,000 running VMs. Over a year ago, we changed RC2 to primarily use KVM for its virtualization. We had to convert most of our existing RHEL base images and user images that were used in Xen VMs into a KVM-compatible format. We were able to automate that conversion reliably offline, using "chroot" and loop-mount based techniques to install non-Xen kernels and update the grub configuration inside the images. Our switch to KVM enabled us to support a much wider range of Linux distributions, because the native IO and virtual IO emulation built into KVM just worked with Linux distributions without complications. The upcoming version 3 of RC2 still uses KVM, but uses a beta version of the IBM Tivoli Virtual Deployment Manager as the back-end deployment engine instead of Tivoli Provisioning Manager workflows. Both of these deployment engines leverage libvirt to manage the definition and life cycle of the KVM-based VMs.
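The exact RC2 conversion tooling is not published; the following Python sketch only illustrates the kind of offline sequence described above - loop-mounting an image, chrooting in to install a non-Xen kernel and update grub, then unmounting. The device paths and package names are hypothetical, and the commands are assembled but deliberately not executed:

```python
def conversion_plan(image_path, mount_point="/mnt/guest"):
    """Build (but do not run) an offline Xen-to-KVM conversion sequence.

    Mirrors the approach described in the article: attach the image to a
    loop device, mount its first partition (assumed here), chroot in to
    swap the kernel and grub configuration, then tear everything down.
    All device names and package names are illustrative.
    """
    return [
        ["losetup", "--find", "--show", "--partscan", image_path],
        ["mount", "/dev/loop0p1", mount_point],              # assumed first partition
        ["chroot", mount_point, "yum", "-y", "install", "kernel"],  # non-Xen kernel
        ["chroot", mount_point, "grub-install", "/dev/loop0"],      # update boot config
        ["umount", mount_point],
        ["losetup", "--detach", "/dev/loop0"],
    ]

for cmd in conversion_plan("rhel-guest.img"):
    print(" ".join(cmd))
```

A real pipeline would run each step with error checking (and as root); the dry-run form above just makes the shape of the automation visible.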
This summer IBM shared plans to extend support for Kernel-based Virtual Machine (KVM) technology to the Power Systems portfolio of server products. On the surface, the announcement sounds simple enough. But like many of IBM’s initiatives, there is a substantial behind-the-scenes effort going on in an open source community to enable this innovation. In this case, much of the work is being done in the QEMU community.
What is the significance of QEMU?
QEMU stands for Quick EMUlator. It is maintained by an open source community. As the name implies, it started out as an emulator. It includes a virtual machine environment for several architectures - x86, Power and System/390, among others. However, KVM doesn't use the processor emulation part - it just uses the virtual machine environment.
Although QEMU does not get as much attention as KVM, the technology is critical to the open source virtualization that KVM enables. The QEMU project is strategic to KVM. You can’t have a hypervisor without a virtual machine environment within which to run the operating system.
A hypervisor comprises the virtual machine monitor, which enforces isolation among running workloads, and the virtual machine environment, which provides the virtual hardware. For the KVM hypervisor, the KVM kernel module provides the virtual machine monitor, while QEMU provides the virtual machine environment. These are two open source projects that are combined to create the full hypervisor.
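The split is visible on the command line: QEMU supplies the virtual hardware, and a single flag hands CPU and memory virtualization to the KVM kernel module. As a hedged sketch (the exact options vary by distribution and guest), a typical invocation can be assembled like this:

```python
def qemu_command(disk_image, memory_mb=2048, vcpus=2):
    """Assemble a typical qemu-system-x86_64 invocation (not executed here).

    The -enable-kvm flag is the point where QEMU, the virtual machine
    environment, delegates CPU and memory virtualization to the KVM
    kernel module, the virtual machine monitor.
    """
    return [
        "qemu-system-x86_64",
        "-enable-kvm",            # use the KVM kernel module as the VMM
        "-m", str(memory_mb),     # guest RAM in MiB
        "-smp", str(vcpus),       # virtual CPUs
        "-drive", f"file={disk_image},if=virtio",  # paravirtual disk
    ]

print(" ".join(qemu_command("guest.qcow2")))
```

Without -enable-kvm the same QEMU binary falls back to software emulation, which is exactly the division of labor described above.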
Some of the reasons the QEMU project is important to IBM are the same as for KVM. It is an open source project with a strong community. It moves quickly to implement new features, enabling us to bring innovation to IBM customers. In fact, our recent work on KVM for Power last year put us into a tie with Red Hat for contributions to the QEMU community.
What does QEMU enable?
Most of the new features that people are seeking in the KVM hypervisor are actually provided by QEMU components.
In fact, most of the KVM tuning and ease-of-use features that are scheduled for release over the next year have also been developed within QEMU. In addition, most of the features that are being developed to make KVM more scalable and faster have also involved both a QEMU component and a KVM component.
IBM support for QEMU
When you implement a new feature in KVM, it is frequently necessary to implement a counterpart in QEMU to take advantage of that new KVM capability. As a result, there tends to be a large overlap between the developers working on KVM and on QEMU.
IBM is committed to supporting QEMU development, and is investing many developer hours in the project. Over the years, IBM has participated in many open source projects, including OpenStack, Apache and Eclipse in addition to Linux. We are approaching the QEMU project with the same intensity.
Mike Day - IBM Distinguished Engineer and Chief Virtualization Architect, Open Systems
A behind-the-scenes peek at the most-attended sporting event in the world
When the U.S. Open kicks off on August 26 it will draw millions of tennis enthusiasts from all over the world for two intense weeks of non-stop world-class tennis action. Fans will watch events unfold not only at the USTA Billie Jean King National Tennis Center in Flushing Meadows, New York, but also through an integrated online, mobile and social experience delivering real-time play-by-play action, live video streaming and live match scores and statistics, ensuring that every fan experiences the thrill and excitement on center court. I am proud to say that for more than 20 years, the US Open has relied on IBM as the end-to-end IT provider to enable and deliver this interactive experience through a myriad of fan-facing technologies.
Understanding that it is not possible for all tennis fans to make it to New York for the matches, it is a priority for the USTA (United States Tennis Association), the governing board for tennis in the United States, to provide content and information to them any way they want at any time of day or night. To support the USTA’s goal, IBM delivers the US Open experience to fans through the digital platform, and maintains uninterrupted availability of service throughout the event.
The capabilities provided to US Open tennis fans continue to expand. To name a few, there is the popular SlamTracker application that provides real-time scoring to fans for all matches. IBM's "Keys to the Match" analysis is built into the SlamTracker application. The Keys are generated by using IBM predictive analytics software to analyze over 41 million data points from the past eight years of Grand Slam data for all men's and women's matches. This feature helps fans understand the important things a player must do to increase the likelihood of winning a match. And, mobile support has been expanded to include iPads as well as iPhone and Android devices. Fans who are physically at an event or watching the US Open on television at home often want access to digital information, to join the conversation on social media and to achieve a greater sense of control. The US Open "second screen" experience enables more fan interaction.
Social Media Insights Enhance Fan Experience and System Availability
IBM’s analysis at the Open has expanded to encompass social media. This helps to determine the most popular players, and aids IBM in ascertaining - as play is in progress - the matches that are likely to have the greatest fan traffic.
Behind the scenes, we are using IBM analytics to predict, allocate and monitor capacity in the cloud. By analyzing tournament, player and social data, the system continuously predicts traffic to the website and automatically allocates or deallocates the appropriate resources. Applying predictive analytics in the cloud enhances the online user experience: since we can optimize projections in order to add or reduce capacity, everyone can have an optimal experience. It also saves dollars, since allocating capacity only when it's needed means servers aren't sitting idle. All of our systems are able to generate highly accurate forecasts of how much traffic to expect, but we also look at our log history and social media discussions to figure out if there is a spike in interest around a certain match that may translate into a rapid need for additional capacity.
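The IBM tooling behind this is not public, but the core idea of allocating capacity from a traffic forecast can be sketched in a few lines. Everything here is illustrative - the function name, the per-server capacity and the headroom factor are hypothetical, not figures from the US Open deployment:

```python
import math

def servers_needed(forecast_rps, capacity_per_server=500, headroom=1.25):
    """Toy sizing rule: turn a requests-per-second forecast into a server
    count, keeping some headroom above the prediction. All numbers are
    illustrative assumptions, not from any real deployment."""
    return max(1, math.ceil(forecast_rps * headroom / capacity_per_server))

# A spike in social-media interest raises the forecast, and allocation follows:
print(servers_needed(400))     # quiet period
print(servers_needed(6000))    # viral match drives traffic up
```

Real predictive autoscaling layers a forecasting model on top of a rule like this, but the shape - forecast in, allocation out - is the same.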
A great example of the insight social media can provide is the Australian Open in 2012 when a match between Novak Djokovic and Rafael Nadal went on for nearly six hours. As it continued, it became clear that this was one of the longest finals matches in Grand Slam history. Once that happened, people started tweeting about it, and social media discussions sprang up. This drove additional traffic to the website because people wanted to witness it themselves.
Elastic Capacity Enabled by IBM and Open Source Technologies
The elastic capacity of the IT infrastructure supporting the US Open is made possible by IBM SmartCloud technologies, which enable fast creation and dynamic allocation of resources transparently to users, while also supporting real-time access. The US Open cloud environment includes virtualized IBM servers, software and storage across the globe, and supports the continuous availability and scalability required. The capabilities provided by IBM Monitoring optimize workloads and provide the visibility necessary to allocate resources and more intelligently plan future growth.
Like most big shops, we are not homogeneous. We rely on our own IBM technologies and open source. Real-time and historical data analytics is enabled by IBM Smarter Analytics which is a combination of products including IBM InfoSphere BigInsights built on top of Apache Hadoop and IBM InfoSphere Streams. We also use a variety of servers, including both IBM Power Systems and IBM System x, for our cloud. And, we use a mix of operating systems. We rely on Red Hat Enterprise Linux, SUSE Linux Enterprise Server and AIX. We have different capabilities that we must support and which require different operating systems. Cloud enables us to do that very easily. We also use the KVM (Kernel-based Virtual Machine) hypervisor to manage our virtual machines on System x – and IBM has just announced that KVM on Power will soon become available.
The reality is that each platform has its own attributes and that is why we include them in the mix. For example, Power's Logical Partitioning (LPAR) divides a server's resources into virtual "logical" partitions, and we continually take advantage of the LPAR mobility aspect of Power Systems. Power allows us to migrate live workloads from one physical frame to another without any impact. If we have a failure on one of our machines, we can do what we call "frame evacuation," and move all the running servers including the databases to another machine, then make a repair, and move them back. You can do this on the fly in the middle of the day, in the middle of a peak match, without any impact to the business and, for us, that is critical.
The good news is that when the US Open happens in Flushing Meadows starting this week, tennis fans will have the highest quality experience possible, whether they are at the Tennis Center in person, or monitoring the matches online. Even better news is that all of these technologies can be applied to a wide range of use cases across many industries and are available today.
You can learn more at: http://www.ibm.com/usopen
Software Engineer and Master Inventor, IBM
Last week, IBM was the premier sponsor of the Red Hat Summit in Boston for the ninth year in a row. This conference is a highlight for me each year because it gives both companies the opportunity to showcase the joint solutions we deliver to our clients, hear what mutual customers have to say first-hand, and provide a peek at what will be coming in the year ahead. There is always a lot of energy at the Red Hat Summit spurred by thought-provoking presentations and the unveiling of major innovations. This year was no exception.
Kicking off IBM's participation in the Red Hat Summit, Arvind Krishna, GM Development and Manufacturing, IBM STG, delivered a keynote in which he announced new IBM initiatives to further support and speed up the adoption of the Linux operating system across the enterprise. Arvind told attendees that IBM is opening two new Power Systems Linux Centers in Austin, Texas, and New York, in addition to the Power Systems Linux Center launched in Beijing in May. Arvind also spoke about IBM's plans to extend support for Kernel-based Virtual Machine (KVM) technology to the Power Systems portfolio of server products - giving IBM Power customers an open choice. The new centers will make it easier for developers to build new applications for big data, cloud, mobile and social business computing using Linux - and in the future, KVM - with the latest IBM POWER7+ processor technology. Signifying the importance of these announcements, the news was covered widely in the news media, including Forbes' DividendChannel, ZDNet, eWeek, Linux and Life, Computer Business Review, and The Register.
At the Summit, Red Hat announced the global availability of Red Hat Enterprise Virtualization 3.2, which builds on the industry-leading performance of the KVM hypervisor to offer an enterprise-class data center virtualization and management solution, with fully supported Storage Live Migration and a new third-party plug-in framework. Red Hat also announced that IBM is joining the Red Hat OpenStack Cloud Infrastructure Partner Network, the availability of the new Red Hat OpenStack Certification, and the launch of the Red Hat Certified Solution Marketplace. The Red Hat Certified Solution Marketplace already includes more than 500 products that have been certified as OpenStack compute (Nova) compatible, from technology leaders - including IBM. IBM's collaboration with Red Hat and the OpenStack ecosystem is in line with our commitment to give clients the flexibility, cost-effectiveness, and security that is necessary for cloud computing - both now and in the future.
It was clear at the Summit that cloud is on our customers’ roadmaps. Both IBM and Red Hat understand the importance of the cloud and the critical role that Linux and KVM play in the cloud. Whether it is private, public, or hybrid, we know customers have to virtualize to get there – and both IBM and Red Hat are committed to KVM as the virtualization hypervisor.
There were many other high points at this year’s conference as well. In our booth, IBM profiled technology from IBM PureSystems, IBM System x, IBM BladeCenter, IBM Power Systems, and IBM System z, and demonstrated the latest IBM solutions for cloud computing, open virtualization with KVM, and big data. I also had the opportunity to moderate a panel discussion in which representatives from IBM, Red Hat, and the University of Connecticut participated. The discussion focused on common Red Hat Enterprise Virtualization, KVM, and OpenStack use cases and the business benefits that are being realized. I was pleased to see a packed room with the audience asking many more technical questions about KVM than in prior years.
As I left the conference this year, I was struck by the thought that something was very different. Whether customers are discussing the use of KVM in the cloud, or adding it as a second hypervisor for “hyperdiversity,” the debate about whether KVM is technically ready is now over. It has achieved impressive SPECvirt and TPC-C benchmarks, security certifications, and according to IDC, is showing impressive growth in unit shipments. We are no longer explaining what KVM is. Instead, this year, we were able to show a robust portfolio of clients that have realized success with KVM. The conversation around KVM has changed.
Jean Staten Healy - Director, Worldwide Linux and Open Virtualization, IBM
Just a few years ago, many enterprise customers predicted they would never use cloud computing because it was too risky. Fast forward, and today the picture is a stark contrast. Compelling economic advantages have trumped all other concerns. Worldwide revenue from public IT cloud services, which exceeded $21.5 billion in 2010, will skyrocket to $72.9 billion in 2015, representing a compound annual growth rate of 27.6% - four times the projected growth for the worldwide IT market as a whole, according to IDC cloud research.
Once that initial leap to the cloud has been made, what else do organizations look for? It is clear that they want a choice of hypervisor technologies for their cloud deployments – including open source options such as KVM (Kernel-based Virtual Machine). According to a recent IDC white paper, “KVM: Open Virtualization Becomes Enterprise Grade,” cloud providers are embracing KVM. Many prominent public clouds are built on KVM, including the Google Compute Engine, HP Cloud, and IBM SmartCloud Enterprise. KVM has also become the unofficial reference standard for OpenStack, and is the choice of over 95% of OpenStack clouds, the IDC paper reports.
Beyond service providers, organizations that are deploying private clouds are also more amenable now to using a new hypervisor. This is the result of hypervisor technology being increasingly viewed as offering a range of enterprise-grade alternatives. The IDC white paper points out that, when asked in a survey which hypervisor they would prefer to use with their private cloud system, more than half of respondents said they would like to use a new hypervisor rather than the existing one. In addition, IDC says that when choosing the second hypervisor, companies are equally likely to choose an open source solution as a proprietary one, a result of maturation of open source technologies.
Why do organizations choose KVM for the cloud?
Cost – For anyone deploying cloud services, but particularly for cloud service providers competing for business, the ability to provide a high level of service while keeping infrastructure costs down is critical. For example, Dutch Cloud, a cloud service provider, has found that using IBM SmartCloud Provisioning enables it to bring in customer environments on VMware and reduce costs by moving them to KVM. Not only is KVM affordable, but for organizations already running Linux servers, KVM is already included in the main enterprise Linux distributions.
Flexible tooling – Since there is no single management infrastructure that must be used, KVM enables choice in terms of cloud and virtualization management. Companies can build their own toolset, or they can use a variety of products, including OpenStack, as well as IBM products such as SmartCloud Provisioning and SmartCloud Orchestrator which support KVM. Solutions that support multiple hypervisors enable KVM to easily be added to the mix to take advantage of its lower costs.
Scalability and fast provisioning – KVM can pack virtual machines very densely on a host, as demonstrated in a recent SPECvirt benchmark, resulting in great efficiency. KVM also uses thin provisioning, which means the guest image file is sparse - only the data actually written is stored - so only a portion of the file is transferred over the network to the host machine. This enables organizations to start up the guest quickly, an important consideration for cloud deployment.
Security – KVM benefits from SELinux, which enables it to provide Mandatory Access Control and enforced isolation of virtual machines. Proving the high level of security provided by SELinux and KVM and setting the stage for broader enterprise adoption, Red Hat and SUSE enterprise Linux distributions with KVM have achieved Common Criteria Certificates at EAL 4+.
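The thin-provisioning point above rests on sparse allocation: a file can have a large apparent size while occupying almost no disk. This minimal Python sketch (an illustration of the mechanism, not of KVM's qcow2 format itself) shows the difference on a Linux filesystem:

```python
import os
import tempfile

# Create a file with a 1 GiB apparent size but write no data into it.
# On filesystems that support sparse files, no blocks are allocated.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.truncate(1 << 30)          # extend to 1 GiB without writing

st = os.stat(path)
apparent = st.st_size            # logical size: 1 GiB
actual = st.st_blocks * 512      # bytes actually allocated on disk
print(apparent, actual)

os.unlink(path)                  # clean up the temporary file
```

Only the allocated portion needs to travel over the network when the image is copied to a host, which is why thin-provisioned guests start so quickly.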
Today, because of these compelling advantages, many of our clients are choosing KVM, both for public clouds and private clouds.
Jean Staten Healy
Director, Worldwide Linux and Open Virtualization, IBM
IBM SmartCloud Provisioning is a workload-optimized cloud offering which combines infrastructure and platform capabilities to allow quick cloud deployment – and support for KVM and multiple hypervisors helps keep costs under control.
Requirements for public and private cloud provisioning have similarities, but there are also key differences. All cloud providers, whether private or public, are concerned with availability and security. But public cloud providers have the added requirement to remain flexible to meet a wide range of customer deployment needs, while at the same time keeping a firm grip on costs, both to remain competitive and to ensure their own profitability. IBM SmartCloud Provisioning, which was designed specifically as an infrastructure-as-a-service offering, can play a role in all of those areas and provide additional capabilities with rapid composite application deployments.
Rather than requiring service providers to build a cloud from scratch using virtualization management tools, IBM SmartCloud Provisioning offers a high-scale, low-touch cloud provisioning system. It is a hypervisor-agnostic, infrastructure-as-a-service solution enabling fast, automated cloud provisioning, parallel scalability, integrated fault tolerance and a foundation for more advanced cloud capabilities. In addition, the private cloud environment offers near-zero downtime and automated recovery from hardware and software failures across heterogeneous platforms.
Support for Open Standards and Hypervisor-Agnostic
While IBM SmartCloud Provisioning was originally built on top of KVM (Kernel-based Virtual Machine), support has been expanded to include VMware ESXi, vCenter, PowerVM, Hyper-V and Xen as well. Support for multiple hypervisors is where we think the industry is going, and the benefit of KVM support in the mix is revealed when you look at the needs of cloud providers.
Cloud providers are on very tight budgets, and they will succeed in selling their services only if they can provide IT services to customers at a lower cost than the customers could achieve themselves - so the market is very cost-competitive. For cloud service providers to make a profit, the cost of the underlying infrastructure is really important. And to retain cloud customers, reliability, speed and scalability are also very important.
For example, Dutch Cloud is a leading ISP based in the Netherlands, focused on SME customers in a few key industries including healthcare and electronics. It provides a range of cloud-based services – from fully managed IaaS through to disaster recovery solutions. Customers select Dutch Cloud for the quality of service delivered and its service assurance.
Dutch Cloud wanted to improve the delivery of its cloud services in terms of cost, speed, and agility, and minimize administration, as well as scale delivery costs to business volumes. Since implementing SmartCloud Provisioning, Dutch Cloud has been able to deploy new services in seconds rather than hours, and has even deployed hundreds of new VM instances in minutes. Adding the cost efficiency, Dutch Cloud has also been able to move a number of its customers from proprietary hypervisors to the more affordable KVM.
Because SmartCloud Provisioning is hypervisor-agnostic, you can match it with a range of hypervisors including VMware ESXi, vCenter, PowerVM, Hyper-V and Xen. There are obviously going to be times when a client indicates a preference for a particular hypervisor. But when there is no specific preference and service is all that matters, the decision plays out this way from the cloud provider's point of view: if you have equivalent capabilities in terms of the hypervisor, and equivalents in terms of virtualization management - because IBM SmartCloud Provisioning is available across a range of virtualization technologies - then it comes down to cost, and KVM wins there hands-down. SmartCloud Provisioning's multi-hypervisor support enables the provider to offer a range of virtualization options without locking the customer in, and because KVM costs less than proprietary alternatives, it opens up a level of affordability that would not be possible otherwise.
And in terms of security, for public sector customers in particular, KVM’s Common Criteria Certification at Evaluation Assurance Level 4+ (EAL4+) is significant. It means that, like other hypervisors, the KVM hypervisor on Red Hat Enterprise Linux and IBM x86 servers now meets government security standards allowing open virtualization to be used in homeland security projects, command-and-control operations, and throughout government agencies that previously were limited to proprietary virtualization technologies.
KVM also goes beyond competitors in terms of security with SELinux, or Security-Enhanced Linux, which provides much greater protection and isolation between virtual machines by enabling mandatory access control rather than just discretionary access control. With discretionary access control, permissions follow the identity a process presents, whereas with mandatory access control a process has to be specifically authorized by a system-wide policy to access a particular resource. This means that if a virtual machine is compromised and impersonates a more privileged identity, it can get around discretionary access control; but under mandatory access control it still does not have the permission, no matter what identity it presents. That is why SELinux - originally developed by the National Security Agency - matters for military-grade security. And because KVM is built on top of Linux, it can apply that protection to its virtual machines.
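The distinction can be made concrete with a toy model - this is purely illustrative decision logic, not SELinux or its policy language. Discretionary checks trust whatever identity a process presents, while mandatory checks consult a fixed policy label the process cannot change:

```python
# Toy model of DAC vs MAC. All names and labels are hypothetical.
DAC_OWNERS = {"disk_a": "vm_a", "disk_b": "vm_b"}            # owner may access
MAC_POLICY = {("label_a", "disk_a"), ("label_b", "disk_b")}  # fixed policy pairs

def dac_allows(identity, resource):
    # DAC: access follows whatever identity the process presents.
    return DAC_OWNERS.get(resource) == identity

def mac_allows(label, resource):
    # MAC: the label is assigned by policy and cannot be changed by the VM.
    return (label, resource) in MAC_POLICY

# A compromised vm_b that impersonates vm_a defeats the DAC check...
print(dac_allows("vm_a", "disk_a"))
# ...but its immutable MAC label still blocks access to vm_a's disk:
print(mac_allows("label_b", "disk_a"))
```

In real deployments this idea appears as sVirt, which gives each KVM guest process its own SELinux label so that even a hijacked QEMU process cannot touch another guest's resources.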
The Bottom Line for Cloud Providers
There are several things that cloud service providers have to consider when provisioning a cloud. The first is the cost of software. The second is the level of virtual machine density that can be achieved – in other words, how many virtual machines can run on a particular piece of hardware while still maintaining a good quality of service, because the more virtual machines on the hardware, the lower the unit costs. Then there is the overall quality of service provided, in terms of reliability and performance. And finally, there is management. What IBM SmartCloud Provisioning is all about is this: how do you provision large numbers of clouds very quickly, with many virtual machine instances, and with minimal need for administration – achieving maximum automation, maximum self-healing, and maximum detection of and recovery from failures?
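The density argument is simple arithmetic. A rough sketch, with entirely hypothetical cost figures, shows how cost per virtual machine falls as density rises:

```python
# Hypothetical numbers, purely to illustrate the unit-cost argument:
# the more VMs a host can carry at acceptable quality of service,
# the lower the cost per VM.
HOST_COST = 12000.0  # assumed hardware + software cost per host

def cost_per_vm(vm_density):
    """Unit cost when vm_density VMs share one host."""
    return HOST_COST / vm_density

print(cost_per_vm(20))  # 600.0 per VM at 20 VMs per host
print(cost_per_vm(40))  # 300.0 per VM at 40 VMs per host
```

Doubling achievable density halves the unit cost, which is why density and software cost dominate the provider’s economics.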
Test drive a full version of IBM SmartCloud Provisioning with the 60-day no-charge trial.
Jean Staten Healy
Director, Worldwide Linux and Open Virtualization, IBM
Last month I blogged about the surprising level of hypervisor diversity that we’re seeing in use by customers – as shown by a report published by Gabriel Consulting and based on a survey of hundreds of IT professionals.
Now I want to discuss what’s behind this – why are so many customers mixing x86 hypervisors, and what are their reasons? In essence, it comes down to three factors – lower cost, technical differences, and customers’ ability to manage multiple hypervisors.
You can read more about what’s driving hypervisor diversity in the second new Gabriel Consulting Report ‘Hyperversity’ Driven by Technical & Cost Differences.
Cost differences matter
The first and most obvious factor is cost. We’re seeing the familiar cycle of high-priced proprietary technologies being challenged by lower-cost open source innovation – the same situation that played out with Linux, Eclipse and Apache.
Although the Gabriel Consulting report shows that customers value proprietary hypervisor technology, it also shows that the cost of implementing it everywhere can often be too high: half of the respondents agreed with the statement “Cost issues make standardizing on one suite too expensive…”
We’re also hearing this from our customers, from large banks to cloud providers. Cost is one of the main reasons they’re adopting KVM.
But according to the report, while cost is a driver for hyperversity, it isn’t the major driver. Intriguingly, technical factors are.
Technical differences matter even more
71% of respondents agreed with the statement “Technical differences between various solutions drive hypervisor diversity”.
The first and most obvious factor behind this is affinity between the hypervisor and the operating system. This is clearly a major factor for KVM and Linux, as well as for Hyper-V and Windows. Hypervisors and operating systems need to perform many of the same tasks – starting processes, managing memory, accessing devices. If Linux comes with the hypervisor already included, integrated, and tested, then that’s a strong reason for adopting KVM.
The second factor is that hypervisors such as KVM which are based on an existing operating system don’t need to reinvent the wheel and can exploit the scalability, security and device support that’s already there. This is one of the reasons why KVM holds the top seven SpecVirt performance benchmarks – it’s leveraging Linux which already has the scalability.
The final factor is how well suited the hypervisor is to supporting cloud computing. The Gabriel Consulting report saw a correlation in its data between KVM and private cloud projects, and speculated on whether there is something about KVM that makes it more amenable to driving private clouds.
We think that the scalability and high VM density provided by KVM, along with its open approach and low cost, makes it a great choice for cloud computing. This is why IBM uses KVM as the hypervisor for both its public cloud, IBM SmartCloud Enterprise, and also its largest internal private cloud, the IBM Research Compute Cloud.
Managing multiple hypervisors
Of course, the adoption of multiple hypervisors, like the prevalence of multiple operating systems, means that customers have to be able to manage the hypervisor diversity successfully.
In the early days of adoption, IT shops are likely to use the virtualization management tools most closely connected with the hypervisors – VMware’s vCenter, Microsoft’s Systems Center, and Red Hat’s Enterprise Virtualization – Management.
As the hypervisor diversity trend continues, this means having multiple management tools and multiple skill sets.
The idea of managing a mixed hypervisor environment from a single pane of glass then becomes increasingly attractive – whether from ISVs in the Open Virtualization Alliance such as Zenoss and ManageIQ, or enterprise systems management vendors such as IBM with Tivoli and IBM Systems Director VMControl.
Whatever happens, it looks like hypervisor diversity is here to stay for at least the next few years – and that promises to make for interesting times.
Program Director, Linux and Open Virtualization Strategy, IBM
Generally, when we think about new technology we tend to focus on all the advantages it adds. And, in the case of server virtualization - a technology that has been strongly embraced over the past decade as it expanded beyond the mainframe into the realm of x86 servers - the advantages are many. Virtualization is being widely embraced in the enterprise because it enables greater utilization of an existing infrastructure, flexibility in terms of reallocating resources when they are needed and where, and not incidentally, significant cost savings due to a smaller physical footprint, energy efficiency and the ability to avoid or postpone new hardware purchases.
Those are some pretty powerful advantages – no argument there. But what about the complexity that comes with the need to manage both physical and virtualized servers, and the increasing need to manage more than one hypervisor? That’s a compelling issue as well – and this is where IBM Systems Director with VMControl comes in.
WHAT IS SYSTEMS DIRECTOR?
The base level of capability, which we call VM lifecycle management, includes the ability to create or delete virtual machines, configure them, start and stop them, and pause or relocate them between servers – as well as all of the basic operations that get done every day at a customer site. And we offer that level of support for the broadest number of hypervisors. On System x, we include that level of support for VMware ESXi, for KVM (Kernel-based Virtual Machine), and for Microsoft Hyper-V. We also have that level of support for PowerVM on the Power platform and z/VM on the mainframe.
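The lifecycle operations listed above can be pictured as a small state machine. The sketch below is a toy Python model of that idea – it is not the VMControl API (whose interfaces are not shown here), and the state and operation names are assumptions for illustration only:

```python
# Toy model of VM lifecycle management as a state machine.
# Not the VMControl API; states and operation names are illustrative.
class VirtualMachine:
    # Legal (state, operation) -> new-state transitions.
    TRANSITIONS = {
        ("defined", "start"): "running",
        ("running", "stop"): "defined",
        ("running", "pause"): "paused",
        ("paused", "resume"): "running",
    }

    def __init__(self, name, host):
        self.name, self.host, self.state = name, host, "defined"

    def do(self, op):
        key = (self.state, op)
        if key not in self.TRANSITIONS:
            raise ValueError(f"cannot {op} while {self.state}")
        self.state = self.TRANSITIONS[key]

    def relocate(self, new_host):
        # Relocation between servers applies to a running VM here.
        if self.state != "running":
            raise ValueError("relocate requires a running VM")
        self.host = new_host

vm = VirtualMachine("web01", "hostA")
vm.do("start")
vm.relocate("hostB")
vm.do("pause")
print(vm.state, vm.host)  # paused hostB
```

The value of a lifecycle manager is precisely that it enforces which operations are legal in which state, across every hypervisor it supports.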
Beyond this base level, IBM also offers higher level editions of VMControl that add functionality such as image management and system pools, which is the ability to combine multiple virtual machines across multiple servers and manage them as though they were a single physical entity. That advanced support is now available for PowerVM on Power Systems and for KVM on System x, and this level of advanced support for additional hypervisors is on our product roadmap.
WHY IS IT IMPORTANT TO SUPPORT A RANGE OF HYPERVISORS?
In the past, many customers would purchase both the hypervisor and the virtualization management from vendors such as VMware. But now, with the choice of hypervisors available, and the advances made by Windows with the Hyper-V hypervisor and by Linux distributions such as Red Hat with KVM, customers are getting very good hypervisors and virtualization solutions at no extra cost, “in the box” with the operating system. Since the operating system is something they have to pay for anyway, many customers are thinking: why pay an additional “tax” for third-party virtualization when I am getting “good enough” hypervisor technology bundled with the operating system?
With Windows Datacenter Edition, clients get Hyper-V and can run an unlimited number of Windows guests; with the equivalent version of Red Hat Enterprise Linux they get KVM and can run unlimited Linux guests at no additional cost. As a result, they are not removing VMware, but as they deploy new servers they are choosing not to put VMware on everything. For systems targeted primarily at Linux workloads, clients often choose Red Hat Enterprise Linux since they get KVM for no additional cost, and with IBM Systems Director VMControl we provide a way to manage the KVM hypervisor that comes with Red Hat Enterprise Linux 6.2.
MANAGING PHYSICAL AND VIRTUAL RESOURCES THROUGH ONE PANE OF GLASS
The transition to cloud computing blurs the lines between administrators and users, with workload provisioning being delegated to end users and consumption of IT resources shifting to a ‘pay as you go’ model. Likewise, administrators are having to broaden their skill sets beyond a single type of resource (such as servers, networks or storage) and become multi-skilled in order to support cloud infrastructures requiring pooled resources. IBM Systems Director is rapidly evolving to support the increasingly sophisticated demands of this next generation of administrator.