KVM is well known as a hypervisor for Linux, and its integration with the Linux kernel and inclusion in the mainline kernel source tree make it a natural choice for Linux.
But why would anyone use KVM as a hypervisor for Windows? Isn't that counterintuitive, and wouldn't Hyper-V or VMware be a more natural choice?
Think again. IBM uses KVM as the hypervisor for its "IBM SmartCloud Enterprise" public cloud - for both Linux and Windows instances. And you might want to do the same. To understand why, we need to dig deeper into what a hypervisor needs to do - and how hypervisors relate to operating system kernels.
Fundamentally, a hypervisor needs to create and run virtual machines, allocate and manage memory, protect different virtual machines from trampling over each other, share the processor(s) between different virtual machines, and interface to the hardware devices. Yes, of course there's a lot more complexity in doing this, and especially in doing so efficiently, but the hypervisor is in effect a mini operating system - without the complexity of the graphical user interface, command line utilities and so on.
KVM plugs into an existing operating system - Linux - and turns it into a standalone hypervisor: one that runs on bare metal and uses the hardware virtualization support included in recent x86 processors.
KVM then uses the existing capabilities of Linux to allocate memory, provide security, and schedule the processor(s) to give the right amount of time to each virtual machine. It doesn't need to reinvent the wheel - Linux already provides these functions, and does so very well.
And then you can take this combination of KVM and Linux, remove the operating system code you don't need, and you end up with an optimized, efficient, standalone hypervisor. Red Hat has done just this to produce RHEV-H, the Red Hat Enterprise Virtualization Hypervisor.
Now that you've got your standalone hypervisor, why would you use it for virtualizing Windows? Here are three reasons for starters:
* KVM offers a lightweight, high performance, low cost hypervisor
* KVM can support both Linux and Windows guests, providing a common hypervisor for mixed environments
* KVM can use the advanced security included in Linux through SELinux to provide mandatory access control protection between virtual machines
So next time you think about KVM, don't think about it as just a hypervisor for Linux. Think of KVM as a standalone hypervisor for both Linux and Windows.
Program Director - Cross-IBM Linux and Open Virtualization Strategy
(Originally posted on IBM Smarter Computing tumblr blog)
As we enter an era of Smarter Computing, IT organizations are facing exploding demand. Data is more than doubling every two years and new services with greater quality are requested. All this, on budgets that on average grow less than one percent per year.
As IT organizations learn how to do more with less, virtualizing servers, storage and networks can help them achieve a simpler, more scalable and cost-efficient IT infrastructure. Proper management of the virtualized infrastructure also improves the speed of deployment of new services. The road to improved business agility has four distinct stages that range from securing IT efficiency in the consolidation stage, to gaining business effectiveness in the optimize stage.
Companies frequently start by virtualizing servers. This can deliver immediate benefits from lower capital expense and reduced energy costs. For example, Edith Cowan University in Australia consolidated a large, distributed, older infrastructure of systems and storage into an end-to-end, cost-effective solution using virtualization on IBM System x. They reduced their physical server count from 600 to 100, achieved significant savings in power and cooling, and freed up administrator time for higher value projects.
Further benefits are available by using IBM Systems Director to manage physical and virtual resources across the entire IBM Systems portfolio (System x, Power, System z, storage, networking) and across multiple virtualization environments (KVM, VMware, etc.). Companies that have implemented Systems Director achieve important savings, such as reducing server management costs by up to 34 percent. And using additional tools from IBM Tivoli, IT administrators can deploy new workloads and services more rapidly across IBM and non-IBM environments.
The virtualization journey offers a solid foundation for cloud computing. Clients like China Telecom Jiangxi (.pdf) rely on IBM’s virtualization solutions and expertise to achieve the flexibility and economic benefits of Smarter Computing. Using IBM Power servers, IBM PowerVM and IBM Systems Director VMControl, China Telecom Jiangxi created cloud landscapes and managed pools of virtual systems. They used the IBM SAN Volume Controller (SVC) to virtualize and manage storage. With this IBM solution, they reduced time to market for new offerings from months to days, improved utilization, cut hardware costs by over 50 percent, and reduced power requirements and CO2 emissions.
IBM also provides clients choice, by supporting open source virtualization technologies such as KVM that are cost effective, and offer enterprise-class performance and scalability. In May, IBM helped found the Open Virtualization Alliance, an industry consortium focused on driving market adoption of KVM and fostering an ecosystem of KVM based solutions. Since then, more than 170 members have joined, many of them virtualization, datacenter and Cloud solution providers. This fast pace of enrollment illustrates the excitement we see in the industry around KVM, and the customer demand for an open alternative in virtualization.
IBM’s virtualization solutions are a critical component of Smarter Computing and the foundation for cloud computing, helping to improve business agility and staff productivity. IBM consistently demonstrates the economic benefits of virtualization on our range of server and storage platforms, and with that addresses the biggest challenges that CIOs and IT architects face today.
The IBM Research Compute Cloud (RC2) is a private cloud for internal IBM use that currently hosts over 2,000 running VMs. Over a year ago, we changed RC2 to primarily use KVM for its virtualization. We had to convert most of our existing RHEL base images and user images that were used in Xen VMs into a KVM-compatible format. We were able to automate that conversion reliably offline, using "chroot" and loop-mount based techniques to install non-Xen kernels and update the grub configuration inside the images. Our switch to KVM enabled us to support a much wider range of Linux distributions, because the native I/O and virtual I/O emulation built into KVM just worked with Linux distributions without complications. The upcoming version 3 of RC2 still uses KVM, but uses a beta version of the IBM Tivoli Virtual Deployment Manager as the back-end deployment engine instead of Tivoli Provisioning Manager workflows. Both of these deployment engines still leverage libvirt to manage the definition and life cycle of the KVM-based VMs.
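The loop-mount and chroot technique can be sketched roughly as follows. This is an illustrative outline only, not the actual RC2 conversion tooling: the image path, mount point and package names are placeholders, and it assumes a raw RHEL image with the root filesystem on the first partition. It must be run as root.

```shell
#!/bin/sh
# Hypothetical sketch of an offline Xen-to-KVM image conversion.
IMG=rhel-guest.img     # raw disk image previously booted under Xen (placeholder)
MNT=/mnt/guest

# Attach the image to a loop device, scanning for its partitions
LOOP=$(losetup --find --show --partscan "$IMG")
mkdir -p "$MNT"
mount "${LOOP}p1" "$MNT"

# Enter the guest filesystem and swap the Xen-only kernel for a standard one
mount --bind /proc "$MNT/proc"
chroot "$MNT" rpm -e kernel-xen
chroot "$MNT" yum -y install kernel

# The guest's grub configuration must then be edited so the default boot
# entry points at the newly installed non-Xen kernel (omitted here).

umount "$MNT/proc"
umount "$MNT"
losetup -d "$LOOP"
```

Because the image is never booted during conversion, the whole process can be scripted and run in bulk against a library of stored images.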
This summer IBM shared plans to extend support for Kernel-based Virtual Machine (KVM) technology to the Power Systems portfolio of server products. On the surface, the announcement sounds simple enough. But like many of IBM’s initiatives, there is a substantial behind-the-scenes effort going on in an open source community to enable this innovation. In this case, much of the work is being done in the QEMU community.
What is the significance of QEMU?
QEMU stands for Quick EMUlator, and it is maintained by an open source community. As the name implies, it started out as an emulator, and it includes a virtual machine environment for several architectures – x86, Power and System 390, among others. However, KVM doesn't use QEMU's processor emulation – it just uses the virtual machine environment.
Although QEMU does not get as much attention as KVM, the technology is critical to the open source virtualization that KVM enables. The QEMU project is strategic to KVM. You can’t have a hypervisor without a virtual machine environment within which to run the operating system.
A hypervisor is comprised of the virtual machine monitor, which enforces isolation among running workloads, and the virtual machine environment, which provides the virtual hardware. For the KVM hypervisor, the KVM kernel module provides the virtual machine monitor, while QEMU provides the virtual machine environment. These are two open source projects that are combined to create the full hypervisor.
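The division of labor is visible right on the command line. In this illustrative sketch (disk.qcow2 is a placeholder image, and the flags shown assume a standard QEMU build), the same QEMU binary supplies the virtual hardware either way; one extra flag hands CPU and memory virtualization to the KVM kernel module:

```shell
# QEMU alone: full software emulation, QEMU provides the virtual
# machine environment (firmware, devices, I/O) and emulates the CPU.
qemu-system-x86_64 -m 2048 -drive file=disk.qcow2,format=qcow2

# With -enable-kvm, QEMU still provides the virtual hardware, but the
# KVM kernel module (via /dev/kvm) acts as the virtual machine monitor,
# running guest code directly on the processor.
qemu-system-x86_64 -enable-kvm -m 2048 -drive file=disk.qcow2,format=qcow2
```

This is why a feature often needs work in both projects: new virtual hardware lands in QEMU, while the performance and isolation machinery lands in the KVM module.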
Some of the reasons the QEMU project is important to IBM are the same as for KVM. It is an open source project with a strong community. It moves quickly to implement new features, enabling us to bring innovation to IBM customers. In fact, our recent work on KVM for Power last year put us into a tie with Red Hat for contributions to the QEMU community.
What does QEMU enable?
Most of the new features that people are seeking in the KVM hypervisor are actually provided by QEMU components.
In fact, most of the KVM tuning and ease-of-use features that are scheduled for release over the next year have also been developed within QEMU. In addition, most of the features that are being developed to make KVM more scalable and faster have also involved both a QEMU component and a KVM component.
IBM support for QEMU
When you implement a new feature in KVM, frequently it is necessary to implement a counterpart in QEMU to take advantage of that new KVM capability. As a result, there tends to be a large overlap in the developers that are working in KVM and QEMU.
IBM is committed to supporting QEMU development, and is investing many developer hours into the project. Over the years, IBM has participated in many open source projects, including OpenStack, Apache and Eclipse, in addition to Linux. We are approaching the QEMU project with the same intensity.
Mike Day - IBM Distinguished Engineer and Chief Virtualization Architect, Open Systems
A behind-the-scenes peek at the most-attended sporting event in the world
When the U.S. Open kicks off on August 26 it will draw millions of tennis enthusiasts from all over the world for two intense weeks of non-stop world-class tennis action. Fans will watch events unfold not only at the USTA Billie Jean King National Tennis Center in Flushing Meadows, New York, but also through an integrated online, mobile and social experience delivering real-time play-by-play action, live video streaming and live match scores and statistics, ensuring that every fan experiences the thrill and excitement on center court. I am proud to say that for more than 20 years, the US Open has relied on IBM as the end-to-end IT provider to enable and deliver this interactive experience through a myriad of fan-facing technologies.
Understanding that it is not possible for all tennis fans to make it to New York for the matches, it is a priority for the USTA (United States Tennis Association), the governing board for tennis in the United States, to provide content and information to them any way they want at any time of day or night. To support the USTA’s goal, IBM delivers the US Open experience to fans through the digital platform, and maintains uninterrupted availability of service throughout the event.
The capabilities provided to US Open tennis fans continue to expand. To name a few, there is the popular SlamTracker application that provides real-time scoring to fans for all matches. IBM’s “Keys to the Match” analysis is built into the SlamTracker application. The Keys are generated by using IBM predictive analytics software to analyze over 41 million data points from the past eight years of Grand Slam data for all men’s and women’s matches. This feature helps fans understand the important things a player must do to increase the likelihood of winning a match. And, mobile support has been expanded to include iPads as well as iPhone and Android devices. Fans who are physically at an event or watching the US Open on television at home often want access to digital information, to join the conversation on social media and to achieve a greater sense of control. The US Open “second screen” experience enables more fan interaction.
Social Media Insights Enhance Fan Experience and System Availability
IBM’s analysis at the Open has expanded to encompass social media. This helps to determine the most popular players, and aids IBM in ascertaining - as play is in progress - the matches that are likely to have the greatest fan traffic.
Behind the scenes, we are using IBM analytics to predict, allocate and monitor capacity in the cloud. By analyzing tournament, player and social data, the system continuously predicts traffic to the Web site and automatically allocates or deallocates the appropriate resources. Applying predictive analytics in the cloud enhances the online user experience: since we can optimize projections in order to add or reduce capacity, everyone can have an optimal experience. It also helps save dollars, since allocating capacity only when it's needed means servers aren't sitting idle. All of our systems are able to generate highly accurate forecasts of how much traffic to expect, but we also look at our log history and social media discussions to figure out if there is a spike in interest around a certain match that may translate into a rapid need for additional capacity.
A great example of the insight social media can provide is the Australian Open in 2012 when a match between Novak Djokovic and Rafael Nadal went on for nearly six hours. As it continued, it became clear that this was one of the longest finals matches in Grand Slam history. Once that happened, people started tweeting about it, and social media discussions sprang up. This drove additional traffic to the website because people wanted to witness it themselves.
Elastic Capacity Enabled by IBM and Open Source Technologies
The elastic capacity of the IT infrastructure supporting the US Open is made possible by the IBM SmartCloud technologies. This enables fast creation and dynamic allocation of resources transparently to users, while also supporting real time access. The US Open cloud environment includes virtualized IBM servers, software and storage across the globe, and supports the continuous availability and scalability required. The capabilities provided by IBM Monitoring optimize workloads and provide visibility necessary to allocate resources and more intelligently plan future growth.
Like most big shops, we are not homogeneous. We rely on our own IBM technologies and open source. Real-time and historical data analytics is enabled by IBM Smarter Analytics which is a combination of products including IBM InfoSphere BigInsights built on top of Apache Hadoop and IBM InfoSphere Streams. We also use a variety of servers, including both IBM Power Systems and IBM System x, for our cloud. And, we use a mix of operating systems. We rely on Red Hat Enterprise Linux, SUSE Linux Enterprise Server and AIX. We have different capabilities that we must support and which require different operating systems. Cloud enables us to do that very easily. We also use the KVM (Kernel-based Virtual Machine) hypervisor to manage our virtual machines on System x – and IBM has just announced that KVM on Power will soon become available.
The reality is that each platform has its own attributes and that is why we include them in the mix. For example, Power‘s Logical Partitioning (LPAR) divides a server’s resources into virtual “logical” partitions, and we continually take advantage of the LPAR mobility aspect of Power Systems. Power allows us to migrate live workloads from one physical frame to another without any impact. If we have a failure on one of our machines, we can do what we call “frame evacuation,” and move all the running servers including the databases to another machine, then make a repair, and move them back. You can do this on the fly in the middle of the day, in the middle of a peak match, without any impact to the business and, for us, that is critical.
The good news is that when the US Open happens in Flushing Meadows starting this week, tennis fans will have the highest quality experience possible, whether they are at the Tennis Center in person, or monitoring the matches online. Even better news is that all of these technologies can be applied to a wide range of use cases across many industries and are available today.
You can learn more at: http://www.ibm.com/usopen
Software Engineer and Master Inventor, IBM
Last week, IBM was the premier sponsor of the Red Hat Summit in Boston for the ninth year in a row. This conference is a highlight for me each year because it gives both companies the opportunity to showcase the joint solutions we deliver to our clients, hear what mutual customers have to say first-hand, and provide a peek at what will be coming in the year ahead. There is always a lot of energy at the Red Hat Summit spurred by thought-provoking presentations and the unveiling of major innovations. This year was no exception.
Kicking off IBM’s participation in the Red Hat Summit, Arvind Krishna, GM Development and Manufacturing, IBM STG, delivered a keynote in which he announced new IBM initiatives to further support and speed up the adoption of the Linux operating system across the enterprise. Arvind told attendees that IBM is opening two new Power Systems Linux Centers in Austin, Texas, and New York in addition to the Power Systems Linux Center launched in Beijing in May. Arvind also spoke about IBM’s plans to extend support for Kernel-based Virtual Machine (KVM) technology to the Power Systems portfolio of server products – giving IBM Power customers an open choice. The new centers will make it easier for developers to build new applications for big data, cloud, mobile and social business computing using Linux – and in the future, KVM – with the latest IBM POWER 7+ processor technology. Signifying the importance of these announcements, the news was covered widely in the news media, including Forbes' DividendChannel, ZDNet, eWeek, Linux and Life, Computer Business Review, and The Register.
At the Summit, Red Hat announced the global availability of Red Hat Enterprise Virtualization 3.2, which builds on the industry-leading performance of the KVM hypervisor to offer an enterprise-class data center virtualization and management solution, with fully supported Storage Live Migration and a new third-party plug-in framework. Red Hat also announced that IBM is joining the Red Hat OpenStack Cloud Infrastructure Partner Network, the availability of the new Red Hat OpenStack Certification, and the launch of the Red Hat Certified Solution Marketplace. The Red Hat Certified Solution Marketplace already includes more than 500 products that have been certified as OpenStack compute (Nova) compatible, from technology leaders – including IBM. IBM’s collaboration with Red Hat and the OpenStack ecosystem is in line with our commitment to give clients the flexibility, cost-effectiveness, and security that is necessary for cloud computing – both now and in the future.
It was clear at the Summit that cloud is on our customers’ roadmaps. Both IBM and Red Hat understand the importance of the cloud and the critical role that Linux and KVM play in the cloud. Whether it is private, public, or hybrid, we know customers have to virtualize to get there – and both IBM and Red Hat are committed to KVM as the virtualization hypervisor.
There were many other high points at this year’s conference as well. In our booth, IBM profiled technology from IBM PureSystems, IBM System x, IBM BladeCenter, IBM Power Systems, and IBM System z, and demonstrated the latest IBM solutions for cloud computing, open virtualization with KVM, and big data. I also had the opportunity to moderate a panel discussion in which representatives from IBM, Red Hat, and the University of Connecticut participated. The discussion focused on common Red Hat Enterprise Virtualization, KVM, and OpenStack use cases and the business benefits that are being realized. I was pleased to see a packed room with the audience asking many more technical questions about KVM than in prior years.
As I left the conference this year, I was struck by the thought that something was very different. Whether customers are discussing the use of KVM in the cloud, or adding it as a second hypervisor for “hyperdiversity,” the debate about whether KVM is technically ready is now over. It has achieved impressive SPECvirt and TPC-C benchmarks, security certifications, and according to IDC, is showing impressive growth in unit shipments. We are no longer explaining what KVM is. Instead, this year, we were able to show a robust portfolio of clients that have realized success with KVM. The conversation around KVM has changed.
Jean Staten Healy - Director, Worldwide Linux and Open Virtualization, IBM
Just a few years ago, many enterprise customers predicted they would never use cloud computing because it was too risky. Fast forward, and today the picture is a stark contrast. Compelling economic advantages have trumped all other concerns. Worldwide revenue from public IT cloud services, which exceeded $21.5 billion in 2010, will skyrocket to $72.9 billion in 2015, representing a compound annual growth rate of 27.6% - four times the projected growth for the worldwide IT market as a whole, according to IDC cloud research.
Once that initial leap to the cloud has been made, what else do organizations look for? It is clear that they want a choice of hypervisor technologies for their cloud deployments – including open source options such as KVM (Kernel-based Virtual Machine). According to a recent IDC white paper, “KVM: Open Virtualization Becomes Enterprise Grade,” cloud providers are embracing KVM. Many prominent public clouds are built on KVM, including the Google Compute Engine, HP Cloud, and IBM SmartCloud Enterprise. KVM has also become the unofficial reference standard for OpenStack, and is the choice of over 95% of OpenStack clouds, the IDC paper reports.
Beyond service providers, organizations that are deploying private clouds are also more amenable now to using a new hypervisor. This is the result of hypervisor technology being increasingly viewed as offering a range of enterprise-grade alternatives. The IDC white paper points out that, when asked in a survey which hypervisor they would prefer to use with their private cloud system, more than half of respondents said they would like to use a new hypervisor rather than the existing one. In addition, IDC says that when choosing the second hypervisor, companies are equally likely to choose an open source solution as a proprietary one, a result of maturation of open source technologies.
Why do organizations choose KVM for the cloud?
Cost – For anyone deploying cloud services, but particularly for cloud service providers which are competing for business, the ability to provide a high level of service while keeping infrastructure costs down is critical. For example, DutchCloud, a cloud service provider, has found that using IBM SmartCloud Provisioning enables it to bring in customer environments on VMware and reduce costs by moving them to KVM. Not only is KVM affordable, but for organizations that are already using Linux servers, KVM is already included in the main enterprise Linux distributions.
Flexible tooling – Since there is no single management infrastructure that must be used, KVM enables choice in terms of cloud and virtualization management. Companies can build their own toolset, or they can use a variety of products, including OpenStack, as well as IBM products such as SmartCloud Provisioning and SmartCloud Orchestrator which support KVM. Solutions that support multiple hypervisors enable KVM to easily be added to the mix to take advantage of its lower costs.
Scalability and fast provisioning – KVM can pack virtual machines very densely on a host, as demonstrated in a recent SPECvirt benchmark, resulting in great efficiency. KVM also uses thin provisioning, which means the guest image file only stores the data that has actually been written, so only a portion of the file is transferred over the network to the host machine. This enables organizations to start up the guest quickly, an important consideration for cloud deployment.
Security – KVM benefits from SELinux, which enables it to provide Mandatory Access Control and enforced isolation of virtual machines. Proving the high level of security provided by SELinux and KVM and setting the stage for broader enterprise adoption, Red Hat and SUSE enterprise Linux distributions with KVM have achieved Common Criteria Certificates at EAL 4+.
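The thin-provisioning behavior described under scalability above can be sketched with QEMU's standard qemu-img tool. This is an illustrative sketch with placeholder file names; it assumes qcow2 images, the format commonly used with KVM:

```shell
# A qcow2 image only allocates blocks the guest actually writes, so a
# nominal 20 GB disk starts out occupying almost no space on the host:
qemu-img create -f qcow2 golden-rhel.qcow2 20G

# New guests can be thin clones backed by a shared "golden" image.
# Only each guest's private changes consume space or need copying,
# which is what makes fast guest start-up possible:
qemu-img create -f qcow2 -b golden-rhel.qcow2 -F qcow2 guest01.qcow2
```

A provisioning system can therefore keep one golden image per distribution on each host and spin up new guests from it in seconds.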
Today, because of these compelling advantages, many of our clients are choosing KVM, both for public clouds and private clouds.
Jean Staten Healy
Director, Worldwide Linux and Open Virtualization, IBM
IBM SmartCloud Provisioning is a workload-optimized cloud solution that combines infrastructure and platform capabilities to enable quick cloud deployment – and its support for KVM and multiple hypervisors helps keep costs under control.
Requirements for public and private cloud provisioning have similarities, but there are also key differences. All cloud providers, whether private or public, are concerned with availability and security. But public cloud providers have the added requirement to remain flexible to meet a wide range of customer deployment needs, while at the same time keeping a firm grip on costs, both to remain competitive and to ensure their own profitability. IBM SmartCloud Provisioning, which was designed specifically as an infrastructure-as-a-service solution, can play a role in all of those areas and provide additional capabilities with rapid composite application deployments.
Rather than requiring service providers to build a cloud from scratch using virtualization management tools, IBM SmartCloud Provisioning offers a high-scale, low-touch cloud provisioning system. It is a hypervisor-agnostic, infrastructure-as-a-service solution enabling fast, automated cloud provisioning, parallel scalability, integrated fault tolerance and a foundation for more advanced cloud capabilities. In addition, the private cloud environment offers near-zero downtime and automated recovery from hardware and software failures across heterogeneous platforms.
Open Standards Support and Hypervisor-Agnostic Design
While IBM SmartCloud Provisioning was originally built on top of KVM (Kernel-based Virtual Machine), support has been expanded to include VMware ESXi, vCenter, PowerVM, Hyper-V and Xen as well. Support for multiple hypervisors is where we think the industry is going, and the benefit of having KVM in the mix is revealed when you look at the needs of cloud providers.
Cloud providers are on very tight budgets, and they will succeed in selling their services only if they can provide IT services to customers at a lower cost than the customers could achieve for themselves – so it is a very cost-competitive business. For cloud service providers to make a profit, the cost of the underlying infrastructure is critical. And to retain cloud customers, reliability, speed and scalability are also very important.
For example, Dutch Cloud is a leading ISP based in the Netherlands, focused on SME customers in a few key industries including healthcare and electronics. It provides a range of cloud based services – from fully managed IaaS through to disaster recovery solutions. Customers select Dutch Cloud for the quality of service delivered and its service assurance.
Dutch Cloud wanted to improve the delivery of its cloud services in terms of cost, speed, and agility, and minimize administration, as well as scale delivery costs to business volumes. Since implementing SmartCloud Provisioning, Dutch Cloud has been able to deploy new services in seconds rather than hours, and has even deployed hundreds of new VM instances in minutes. Adding the cost efficiency, Dutch Cloud has also been able to move a number of its customers from proprietary hypervisors to the more affordable KVM.
Because SmartCloud Provisioning is hypervisor-agnostic, you can match it with a range of hypervisors including VMware ESXi, vCenter, PowerVM, Hyper-V and Xen. There are obviously going to be times when a client indicates a preference for a particular hypervisor. But when there is no specific preference and service is all that matters, the decision plays out this way from the cloud provider's point of view: if the hypervisors offer equivalent capabilities, and the virtualization management is equivalent – because IBM SmartCloud Provisioning is available across a range of virtualization technologies – then it comes down to cost, and KVM wins there hands-down. SmartCloud Provisioning's multi-hypervisor support enables the provider to offer a range of virtualization options without locking the customer in, and because KVM costs less than proprietary alternatives, it opens up a level of affordability that would not be possible otherwise.
And in terms of security, for public sector customers in particular, KVM’s Common Criteria Certification at Evaluation Assurance Level 4+ (EAL4+) is significant. It means that, like other hypervisors, the KVM hypervisor on Red Hat Enterprise Linux and IBM x86 servers now meets government security standards allowing open virtualization to be used in homeland security projects, command-and-control operations, and throughout government agencies that previously were limited to proprietary virtualization technologies.
KVM also goes beyond competitors in terms of security with SELinux (Security-Enhanced Linux), which provides much greater protection and isolation between virtual machines, and enables mandatory access control as opposed to just discretionary access control. With discretionary access control, permissions are based on a user's role, whereas with mandatory access control a user has to be specifically authorized to access a particular resource. This means that if a virtual machine misbehaves and attempts to impersonate someone with a highly privileged role, it can get around discretionary access control. But with mandatory access control, a misbehaving virtual machine still does not have permission – which matters for military-grade security, and is why SELinux was originally developed by the National Security Agency. And because KVM is built on top of Linux, it can apply these protections to its virtual machines.
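On a Red Hat Enterprise Linux host this shows up as sVirt labeling: each guest process and its disk image are tagged with a matching, randomly chosen SELinux category pair, so one guest can never touch another guest's resources. A hedged sketch of what an administrator might look at follows; the process name, image path and category numbers are illustrative of typical sVirt output, not captured from a specific system:

```shell
# Each running KVM guest gets its own category pair (e.g. c392,c662):
ps -eZ | grep qemu-kvm
# system_u:system_r:svirt_t:s0:c392,c662 ... qemu-kvm

# The guest's disk image carries the matching label, so a guest process
# with a different category pair is denied access by the kernel:
ls -Z /var/lib/libvirt/images/guest01.img
# system_u:object_r:svirt_image_t:s0:c392,c662 guest01.img
```

The enforcement happens in the kernel, so even a compromised QEMU process running as root inside its domain cannot read another guest's image.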
The Bottom Line for Cloud Providers
There are several things that cloud service providers have to consider when provisioning a cloud. The first is the cost of software. The second is the level of virtual machine density that can be achieved – in other words, how many virtual machines can run on a particular piece of hardware while still maintaining a good quality of service – because the more virtual machines on the hardware, the lower the unit costs. Then there is the overall quality of service provided in terms of reliability and performance, and finally there is management. IBM SmartCloud Provisioning is all about this question: how do you provision large numbers of clouds very quickly, with many virtual machine instances, and with minimal need for administration – achieving maximum automation, maximum self-healing, and maximum detection of and recovery from failures?
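The density-versus-unit-cost trade-off described above is simple arithmetic. A minimal sketch, using made-up numbers for a host’s monthly cost:

```python
# Hypothetical numbers: the monthly cost of one physical host (hardware,
# power, software licenses) is spread across the VMs it carries.
def cost_per_vm(monthly_host_cost: float, vm_density: int) -> float:
    return monthly_host_cost / vm_density

# Doubling density halves the unit cost -- provided quality of service
# still holds at the higher density.
assert cost_per_vm(3000.0, 10) == 300.0
assert cost_per_vm(3000.0, 20) == 150.0
```

This is why both hypervisor efficiency (how many VMs fit) and hypervisor licensing cost (the numerator) show up in the provider’s decision.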
Test drive a full version of IBM SmartCloud Provisioning with the 60-day no charge trial
Jean Staten Healy
Director, Worldwide Linux and Open Virtualization, IBM
Last month I blogged about the surprising level of hypervisor diversity that we’re seeing in use by customers – as shown by a report published by Gabriel Consulting and based on a survey of hundreds of IT professionals.
Now I want to discuss what’s behind this – why are so many customers mixing x86 hypervisors, and what are their reasons? In essence, it comes down to three factors – lower cost, technical differences, and customers’ ability to manage multiple hypervisors.
You can read more about what’s driving hypervisor diversity in the second new Gabriel Consulting Report ‘Hyperversity’ Driven by Technical & Cost Differences.
Cost differences matter
The first and most obvious factor is cost. We’re seeing the familiar cycle of high-priced proprietary technologies being challenged by lower-cost open source innovation – the same situation that played out with Linux, Eclipse and Apache.
Although the Gabriel Consulting report shows that customers value proprietary hypervisor technology, it also shows that the cost of implementing it everywhere can often be too high: half of the respondents agreed with the statement “Cost issues make standardizing on one suite too expensive…”
We’re also hearing this from our customers, from large banks to cloud providers. Cost is one of the main reasons they’re adopting KVM.
But according to the report, while cost is a driver for hyperversity, it isn’t the major driver. Intriguingly, the major driver is technical factors.
Technical differences matter even more
71% of respondents agreed with the statement “Technical differences between various solutions drive hypervisor diversity”.
The first and most obvious factor behind this is affinity between the hypervisor and the operating system. This is clearly a major factor for KVM and Linux, as well as for Hyper-V and Windows. Hypervisors and operating systems need to perform many of the same tasks – starting processes, managing memory, accessing devices. If Linux comes with the hypervisor already included, integrated, and tested, that’s a strong reason for adopting KVM.
The second factor is that hypervisors such as KVM which are based on an existing operating system don’t need to reinvent the wheel and can exploit the scalability, security and device support that’s already there. This is one of the reasons why KVM holds the top seven SpecVirt performance benchmarks – it’s leveraging Linux which already has the scalability.
The final factor is how well suited the hypervisor is to supporting cloud computing. The Gabriel Consulting report saw a data correlation between KVM and private cloud projects, and speculated on whether there is something about KVM that makes it more amenable to driving private clouds.
We think that the scalability and high VM density provided by KVM, along with its open approach and low cost, makes it a great choice for cloud computing. This is why IBM uses KVM as the hypervisor for both its public cloud, IBM SmartCloud Enterprise, and also its largest internal private cloud, the IBM Research Compute Cloud.
Managing multiple hypervisors
Of course, the adoption of multiple hypervisors, like the prevalence of multiple operating systems, means that customers have to be able to manage the hypervisor diversity successfully.
In the early days of adoption, IT shops are likely to use the virtualization management tools most closely connected with the hypervisors – VMware’s vCenter, Microsoft’s Systems Center, and Red Hat’s Enterprise Virtualization – Management.
As the hypervisor diversity trend continues, this means having multiple management tools and multiple skill sets.
The idea of managing a mixed hypervisor environment from a single pane of glass then becomes increasingly attractive – whether from ISVs in the Open Virtualization Alliance such as Zenoss and ManageIQ, or enterprise systems management vendors such as IBM with Tivoli and IBM Systems Director VMControl.
Whatever happens, it looks like hypervisor diversity is here to stay for at least the next few years – and that promises to make for interesting times.
Program Director, Linux and Open Virtualization Strategy, IBM
Generally, when we think about new technology we tend to focus on all the advantages it adds. And, in the case of server virtualization - a technology that has been strongly embraced over the past decade as it expanded beyond the mainframe into the realm of x86 servers - the advantages are many. Virtualization is being widely embraced in the enterprise because it enables greater utilization of an existing infrastructure, flexibility in terms of reallocating resources when they are needed and where, and not incidentally, significant cost savings due to a smaller physical footprint, energy efficiency and the ability to avoid or postpone new hardware purchases.
Those are some pretty powerful advantages – no argument there. But what about the complexity that comes with the need to manage physical and virtualized servers, and the increasing need to manage more than one hypervisor? That’s a compelling issue as well – and this is where IBM Systems Director with VMControl comes in.
WHAT IS SYSTEMS DIRECTOR?
The base level of capability, which we call VM lifecycle management, includes the ability to create or delete virtual machines, configure them, and start, stop, pause, or relocate them between servers – all of the basic operations that get done every day at a customer site. And we have that level of support for the broadest number of hypervisors. On System x, we include that level of support for VMware ESXi as well as for KVM (Kernel-based Virtual Machine), and for Microsoft Hyper-V. We also have that level of support for PowerVM on the Power platform and z/VM on the mainframe.
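The lifecycle operations listed above can be sketched as a small state machine. This is an illustration of the concept only – the state names and operations are hypothetical, not VMControl’s or libvirt’s actual API:

```python
# Minimal VM lifecycle state machine (illustrative; names are hypothetical).
VALID = {
    ("defined", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "resume"): "running",
    ("running", "stop"): "defined",
    ("running", "relocate"): "running",  # live-move to another host
    ("defined", "delete"): "deleted",
}

class VM:
    def __init__(self, name: str):
        self.name, self.state = name, "defined"

    def do(self, op: str) -> str:
        key = (self.state, op)
        if key not in VALID:
            raise ValueError(f"cannot {op} while {self.state}")
        self.state = VALID[key]
        return self.state

vm = VM("web01")
assert vm.do("start") == "running"
assert vm.do("pause") == "paused"
assert vm.do("resume") == "running"
```

A management layer like VMControl adds value precisely by enforcing these transitions consistently across many hypervisors, instead of each admin driving each hypervisor’s native tool by hand.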
Beyond this base level, IBM also offers higher level editions of VMControl that add functionality such as image management and system pools, which is the ability to combine multiple virtual machines across multiple servers and manage them as though they were a single physical entity. That advanced support is now available for PowerVM on Power Systems and for KVM on System x, and this level of advanced support for additional hypervisors is on our product roadmap.
WHY IS IT IMPORTANT TO SUPPORT A RANGE OF HYPERVISORS?
In the past, many customers would purchase both the hypervisor and the virtualization management from vendors such as VMware. But now, with the choice of hypervisors available – and the advances made by Microsoft with the Hyper-V hypervisor and by Linux distributions such as Red Hat with KVM – customers are getting very good hypervisors and virtualization solutions at no extra cost, “in the box” with the operating system. Since the operating system is something they have to pay for anyway, many customers are thinking: why pay this additional “tax” for third-party virtualization when I am getting “good enough” hypervisor technology bundled with the operating system?
With Windows DataCenter Edition, clients get Hyper-V and can run an unlimited number of Windows guests; with the equivalent version of Red Hat Enterprise Linux, they get KVM and can run unlimited Linux guests at no additional cost. As a result, they are not removing VMware, but as they deploy new servers they are choosing not to put VMware on everything. For systems targeted primarily at Linux workloads, clients often choose Red Hat Enterprise Linux since they get KVM at no additional cost, and with IBM Systems Director VMControl, we provide a way to manage the KVM hypervisor that comes with Red Hat Enterprise Linux 6.2.
MANAGING PHYSICAL AND VIRTUAL RESOURCES THROUGH ONE PANE OF GLASS
The transition to cloud computing blurs the lines between administrators and users, with workload provisioning being delegated to end users and consumption of IT resources shifting to a ‘pay as you go’ model. Likewise, administrators are having to broaden their skill sets beyond a single type of resource (such as servers, networks or storage) and become multi-skilled in order to support cloud infrastructures requiring pooled resources. IBM Systems Director is rapidly evolving to support the increasingly sophisticated demands of this next generation of administrator.
KVM (Kernel-based Virtual Machine) is gaining traction in the enterprise as a virtualization solution that provides high performance, scalability, and cost efficiency. But misconceptions still abound about this open source hypervisor. Some falsehoods continue to be perpetuated by organizations offering competing products, and others because KVM is maturing quickly and the up-to-date, correct information is not yet widely known. Here, we tackle some of the most persistent myths about KVM - because it’s time to set the record straight.
Myth #1: KVM is a Type 2 hypervisor that is hosted by the operating system, not a bare metal hypervisor.
On x86 hardware, KVM relies on the hardware virtualization instructions that have been in these processors for seven years. Using these instructions the hypervisor and each of its guest virtual machines run directly on the bare metal, and most of the resource translations are performed by the hardware. This fits the traditional definition of a “Type 1,” or bare metal hypervisor.
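On Linux, you can confirm those hardware virtualization instructions are present by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. A minimal sketch that parses that file’s format (the sample strings below are illustrative):

```python
def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if any CPU line advertises Intel VT-x (vmx) or AMD-V (svm)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# On a real system: has_hw_virt(open("/proc/cpuinfo").read())
sample = "processor : 0\nflags : fpu vme de pse vmx ssse3\n"
assert has_hw_virt(sample)
assert not has_hw_virt("flags : fpu vme de pse\n")
```

If the flags are present, the kvm_intel or kvm_amd kernel module can use them, and guests run directly on the processor rather than through software emulation – which is the substance of the Type 1 argument.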
You can also get KVM packaged as a standalone hypervisor – just as VMware ESX is packaged – although initially KVM was not available in that form. One way of getting this is with Red Hat Enterprise Virtualization (RHEV).
Myth #2: KVM only runs Linux workloads.
Myth #3: KVM is only available for x86 platforms.
Myth #4: KVM is only available from Red Hat.
Myth #5: KVM is only available as part of enterprise Linux distributions.
Myth #6: KVM is not secure.
But the myth about security persists because KVM is based on Linux – and Linux carries a lot of baggage with it. There are several reasons for this.
One is that some people think open source code is insecure because anyone can audit the code, find entry points and potential bugs, and escalate them into security exploits. In reality, auditing source code is an overwhelming benefit to security: the more people audit code, the more secure that code becomes. When you use proprietary hypervisor technology with closed source code, you never get to review that code, so you have no idea what has been audited for security and what hasn’t. And in any case, anybody with a disassembler can take apart the binary image and start looking at the assembly code for security holes.
The second reason people say it is not secure is that when KVM is packaged as part of an enterprise Linux distribution, the distribution can include additional components such as an HTTP server, more than one shell, programming languages such as Perl and Python, and almost too many tools to mention. In this case, you have to take the Linux distribution – even an enterprise Linux distribution – and spend time locking it down yourself, or get something like RHEV-H, which is a much smaller component and is locked down by default.
The bottom line is that KVM is not necessarily insecure because it is based on enterprise Linux, but you might want to remove some packages that might have some issues in a Linux distribution – or simply get the RHEV-H version.
Myth #7: There are no virtualization management tools available for KVM.
For more information about KVM, access a new white paper, “KVM: The Rise of Open Enterprise-Class Virtualization,” from the Open Virtualization Alliance, an organization founded to promote awareness and adoption of KVM.
Mike Day, Distinguished Engineer and Chief Virtualization Architect, Open Systems Development, IBM
At Red Hat Summit 2012, long-time partners IBM and Red Hat showcase cost-effective, open source solutions to help “Transform Your IT”
At IBM, we know you can’t have open source without choice, and there is no choice without committed partners and a rich ecosystem.
As we look forward to Red Hat Summit 2012, and Red Hat celebrates the 10-year anniversary of Red Hat Enterprise Linux, I am so pleased that IBM will be the Premier Sponsor of the event taking place on June 26 - 29 in Boston at the Hynes Convention Center.
In the late 1990s, IBM made the decision to fully support Linux, and formed a partnership with Red Hat that has only strengthened over time. While Linux is now considered a mainstay of enterprise IT, it was the introduction of Red Hat Enterprise Linux as well as IBM’s commitment to the open source operating system in the enterprise that were major factors in Linux being taken seriously for business. Offering full support capabilities and ready to run in an enterprise environment, Red Hat Enterprise Linux, introduced 10 years ago, was the first real commercial, business-focused release. And IBM was right there with Red Hat. Basically, Red Hat runs on IBM, and IBM runs on Red Hat.
From the beginning, IBM and Red Hat shared a vision for Linux in the enterprise and we have furthered that vision by maintaining complementary interests that don’t overlap or compete with one another in major areas. By staying independent of one another, but working jointly, we have been able to make the Linux ecosystem richer and stronger. During our 12-plus years of collaboration, IBM and Red Hat have focused on a large number of Linux projects, and a large number of joint customers around the world.
The Partnership Advances
But technology and even partnerships must grow and change to remain meaningful. And so, the IBM-Red Hat story is also moving forward from the initial focus on the open source operating system to also include advancements such as open source virtualization.
Following the emergence of x86 hardware support for virtualization in the mid-2000s and, more recently, the creation of the KVM (Kernel-based Virtual Machine) project, Red Hat and IBM started investing development resources in the KVM open source community, to accelerate the adoption of open virtualization alternatives. IBM and Red Hat are both founding partners of the Open Virtualization Alliance, which is dedicated to fostering awareness and market adoption of open virtualization and KVM.
The launch of Red Hat Enterprise Virtualization 3.0 is a testament to our companies’ significant investment in virtualization. If you virtualize with KVM and Red Hat Enterprise Virtualization, you want to do it on IBM System x. For example, as a new joint IBM-Red Hat case study shows, LetterGen, an IT services provider in Belgium, migrated from VMware to Red Hat Enterprise Virtualization, along with Red Hat Enterprise Linux as the core operating system, and as a result, was able to reduce costs by 67%, increase utilization, optimize system maintenance and improve service levels all around.
Transform Your IT
It is apt that the theme of the upcoming Summit is “Transform Your IT” because that is just what IBM and Red Hat have been helping our joint customers do - and that is why I am really looking forward to this year’s Red Hat Summit. There will be wonderful networking opportunities for our clients and business partners to get to know one another better and great presentations from thought leaders and technical experts.
One you won't want to miss is a keynote on Tuesday, June 26, at 5:30 pm in the Main Hall, on “Open Technologies for the Next Era of Computing”. This presentation will be delivered by Robert LeBlanc, Senior Vice President, IBM Software Group. As you may know already, Robert has long advocated IBM’s participation in the open source community, and, when serving as Vice President of IBM Software Strategy, was responsible for crafting the Linux strategy for IBM.
Of course, IBM will also have a booth and there will be additional sessions covering how hardware and software solutions from IBM and Red Hat provide solutions for a Smarter Planet. The entire IBM product line is enabled for Red Hat Enterprise Linux, making it easy for businesses to reduce complexity, lower costs and take advantage of the power of open standards. Technology from IBM System z, IBM PowerLinux, IBM System x and IBM BladeCenter will be profiled, demonstrating the latest IBM solutions for cloud computing, open virtualization, and systems management. In addition, IBM will be showcasing IBM PureSystems, a new family of expert integrated systems, optimized for performance, virtualized for efficiency, designed for cloud.
I will also moderate a panel discussion on Wednesday, June 27 at 10:40 am in Room #309. Titled “KVM on IBM System x – Leverage the Ecosystem!,” we’ll talk with members of the open virtualization ecosystem about why KVM and Red Hat Enterprise Virtualization work so well on IBM System x, and about the business value your organization can get out of these solutions. Participating in the discussion will be Wes Ganeko, IBM STG North American System x Sales Executive; Dmitri Joukovski, Vice President of Product Management, Acronis; Carl Trieloff, Director Open Source and Standards, Senior Consulting Engineer, Red Hat; and Dr. Gilad Zlotkin, VP Virtualization & Management Products, Radware.
I hope you will join us at the Red Hat Summit 2012 and take part in advancing the story of open source IT innovation for the future!
You can learn more about IBM’s participation at the Summit here.
The Open Virtualization Alliance will be at Red Hat Summit in booth #2432.
If you have not registered yet for the Summit, there is still time. Register here.
We’ll also be tweeting about our activities at Red Hat Summit 2012 – so follow us on Twitter at: @OpenKVM.
Jean Staten Healy
In the early days of Linux, it was often the technical people in organizations who knew about it and were already implementing, while management had little awareness of Linux. We are seeing that same trend occurring now with KVM. To help further the overall understanding and awareness of KVM and fill in the information gap, here is a list of the most frequently asked questions that we at IBM have encountered in recent panel discussions, conversations, and interviews.
FAQ 1: How does KVM fit in with cloud?
How enterprise-ready is KVM? IBM uses KVM in its public cloud - IBM SmartCloud Enterprise - and our own IBM Research Compute Cloud (RC2), a private cloud for internal IBM, also relies on KVM for its virtualization.
In addition, IBM customer Dutch Cloud is a cloud service provider in the Netherlands. Open standards are very important to Dutch Cloud, and being able to support both KVM and VMware hypervisors with IBM SmartCloud Provisioning enables them to offer choice to their clients. “KVM also saves us a lot of money, because of its lower licensing costs. We are using both KVM and VMware, so IBM SmartCloud Provisioning enables us to bring in customer environments on VMware and reduce costs by moving them to KVM. We’ve also found KVM much easier to install and manage,” explains Martijn van Zoeren, CEO, Dutch Cloud, in a customer case study.
For more information, go to Jean Staten Healy’s blog about KVM and the cloud, Why Open Virtualization is Important for Cloud.
FAQ 2: KVM currently has a small market share. Don’t the other hypervisors have too big a market share for KVM to have any chance of success?
According to analysts, only one in five physical servers is virtualized so far – meaning there is plenty of headroom, because many clients are still making their virtualization decisions. In addition, customers are fearful of vendor lock-in with proprietary solutions. With the availability of new management tools – such as IBM Systems Director VMControl, which covers both proprietary and open source virtualization technologies – heterogeneous virtualization is becoming a much more realistic choice.
Customers like choice and lower costs – as long as the alternative is still enterprise-grade. Linux and Eclipse succeeded through a combination of great technology, a compelling customer value proposition, and an open approach not dominated by any single vendor. KVM has all of these attributes.
FAQ 3: What is the momentum? Is KVM following Linux? Is it sufficiently similar to Linux that it will follow that trajectory?
The market share gain for KVM is likely to be driven by users with an affinity for Linux and open source, by customers who feel they are paying too much for their current virtualization approach, and by service providers implementing cloud computing who want a cost-effective, secure, and scalable virtualization option.
In addition, KVM has a number of advantages compared to other hypervisors. KVM currently holds the top seven SpecVirt virtualization benchmarks. These all use Intel processors and Red Hat virtualization, on a mixture of HP and IBM systems. On the same 2-socket and 4-socket hardware, KVM delivers slightly better virtualization performance than VMware. But where KVM really excels is scalability. Only KVM has published SpecVirt benchmarks for 8-socket, 80-core systems; it can scale much further than VMware and support more processors and larger memory because it inherits the scalability of Linux. This translates to greater virtual machine density on large x86 systems, and therefore better resource sharing and lower costs.
KVM also inherits the Mandatory Access Control of SELinux (Security Enhanced Linux), and uses it to provide a very high level of security between virtual machines. This means that clients can be confident that their data and applications are fully protected, even in a multi-tenant cloud environment. Only KVM has this level of hypervisor security, which it inherits from Linux. In addition, KVM has recently been certified at EAL4+ level in the Common Criteria security certification, with RHEL 5 and IBM servers, and is currently in evaluation with RHEL 6. This gives government and other security-conscious clients the confidence that KVM has been tested at a top security level.
KVM also inherits quality of service features from Linux, including cGroups which enable resources such as processors and memory to be allocated to specific virtual machines, thus ensuring that high priority virtual machines get the resources they need.
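Under contention, cgroup CPU weights translate into proportional guarantees. A sketch of the arithmetic, with illustrative weights mirroring the classic cpu.shares mechanism:

```python
# Each VM's guaranteed CPU fraction under full contention is its weight
# divided by the sum of all weights (the cpu.shares semantics; the VM
# names and weight values here are made up for illustration).
def guaranteed_share(weights: dict) -> dict:
    total = sum(weights.values())
    return {vm: w / total for vm, w in weights.items()}

shares = guaranteed_share({"high_prio_vm": 2048,
                           "low_prio_vm": 1024,
                           "batch_vm": 1024})
assert shares["high_prio_vm"] == 0.5  # 2048 / 4096
```

Because KVM guests are ordinary Linux processes, this kernel mechanism applies to them directly – no separate hypervisor scheduler is needed to give a high-priority virtual machine its guaranteed slice.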
Customers are looking for an open alternative to proprietary virtualization solutions, and KVM provides an enterprise-ready option.
FAQ 4: Where is KVM being adopted today and what are the typical use cases?
KVM is being used for cloud computing because cloud service providers, both public and private, are looking to minimize their costs – by increasing the density of virtual machines on each physical server and by reducing the software licensing costs of the hypervisor.
KVM is also being used for Linux server consolidation. Linux servers are currently less virtualized than Windows servers, and KVM is the natural choice since it is already integrated into the leading enterprise Linux distributions from Red Hat, SUSE and Ubuntu.
And, KVM is also being used by enterprises with heterogeneous virtualization. Large enterprises that are already using VMware and are comfortable using multiple hypervisors are now adding a new wave of virtualized servers. Adding KVM into their data center environments has become a much easier decision to make now that multiple hypervisors can be managed from a single management console.
FAQ 5: How does KVM fit in with the OVA, oVirt, OpenStack, and the rest of the virtualization community?
The KVM community develops the base hypervisor, while oVirt develops the virtualization management software and also packages the hypervisor and management software together. KVM is also included in the Linux source code development tree, rather than being a separate add-on, so it is fully tested and integrated.
OpenStack is a hypervisor-agnostic cloud infrastructure. OpenStack was originally developed on KVM, but it has since added support for multiple hypervisors. OpenStack also uses the native hypervisor management tools – for example, using oVirt to deploy, start and stop virtual machines. In addition, oVirt enables fine-grained control of KVM, suited to a work-group or resource-pool sized infrastructure. It also includes automation, and it exposes much more detail for monitoring, because virtual machines need more granularity in management.
FAQ 6: Does KVM have vMotion or live migration?
FAQ 7: How does KVM differ from Xen? Why did IBM stop working on Xen and start working on KVM?
There is a tipping point at which a disruptive technology takes off in the marketplace, displacing other technologies that may have previously been believed to be firmly entrenched. For open source solutions, this occurs when there is a compelling reason, such as cost, to switch from proprietary technologies, after it has been proven that the new technology delivers enterprise compatibility, ecosystem support, and the high quality and reliability required for enterprise adoption. We have already witnessed that tipping point with Linux - and we are seeing that same story play out now with KVM. As a result, it’s clear that 2012 is shaping up to be the break-out year for KVM.
Creating the backdrop for KVM to emerge in this leading role on the enterprise virtualization stage are two key factors. One is that more un-virtualized servers than virtualized servers continue to be sold, as documented in a recent IDC webinar. This means there is still an advantageous market situation for virtualization technology. If people think the game is over, they are wrong: many organizations are still in the process of adopting virtualization, and there is still a lot of opportunity.
The second contributing factor is the unrelenting pressure on organizations to control costs. Customers are looking to reduce their overhead while maintaining the enterprise quality of their infrastructure software. This is very clear. From the start, people looked at virtualization as a way to help them to eliminate underutilized resources and consolidate and share services better, thus lowering their cost. But what they don’t want to do is end up paying a lot for virtualization software as well. We are hearing from customers about their concerns regarding the high cost of proprietary virtualization software and we know that this is becoming a big pain point for them. The pressure to control cost is driving organizations to virtualization in general, and then steering them in particular to KVM.
Still, the trend toward server virtualization has existed for several years, and the pressure to contain costs as well is certainly not new.
Why, then, is 2012 shaping up to be the big year for KVM in particular?
There are three main reasons.
For one thing, Windows servers have virtualized faster than Linux servers, but now KVM is embedded in all of the enterprise-proven Linux distributions including Red Hat and SUSE, as well as in Canonical Ubuntu, making it easy for Linux servers to be virtualized. According to a recent Canonical survey, KVM is now more popular than Xen as a virtualization hypervisor among survey respondents.
In addition, more and more customers are looking at dual virtualization strategies, in which they adopt KVM alongside VMware. Why? Cost containment. They have an installed base of VMware, but now that KVM is available and enterprise-ready, they see that for the next wave of server virtualization they can further reduce expenses. What enables this is the availability of simple and effective virtualization management. For example, IBM Systems Director VMControl, which added support for KVM in 2011, enables customers to gain more from infrastructure-wide virtualization. Customers can reduce the total cost of ownership of their virtualized environments – servers, storage and networks – by decreasing management costs and increasing asset utilization. VMControl enables the management of virtual environments across multiple virtualization technologies and hardware platforms, from a single pane of glass, enabling users to benefit from a multi-hypervisor virtualization strategy.
IBM’s new SmartCloud Foundation offerings allow organizations to install, manage, configure and automate the creation of cloud services in private, public or hybrid environments with a higher level of control than previously available in the industry. For example, since deploying IBM SmartCloud Provisioning, IBM customer Dutch Cloud, which has a mixed virtualization environment with both KVM and VMware servers, has seen its client base expand significantly while its administrative workload has been reduced. Now the IT team spends 80% of its time on client migrations and only 20% of its time on administration—a more than 70% decrease in administrative time. As a result, Dutch Cloud’s monthly recurring revenue has tripled twice in the last 6 months but their operational costs have remained flat.
KVM has now reached a level of security, enterprise-readiness, and strong third-party support that allows customers to deploy it with confidence, and, in addition, it possesses a robust and rapidly expanding ecosystem of supporting technology providers.
And so for all these reasons, it seems clear that 2012 will be a very big year for KVM: major Linux distributions are supporting KVM; enterprises are adopting KVM as part of dual virtualization strategies that have been further bolstered by the development of comprehensive management from a single pane of glass; and, KVM is proving it is secure, enterprise-ready, and has a strong ecosystem around it. Plus, its lower-cost provides a compelling reason for every enterprise large and small to seriously consider KVM. Just as it did for Linux, the tipping point for KVM has arrived.
Director, WW cross-IBM Linux and Open Virtualization
I am very excited to announce to the community the first educational webcast on KVM hosted by the Open Virtualization Alliance (OVA). You do not need to be a member of the OVA, the webcast is open to the public and free for you and your clients.
The OVA is a consortium of over 225 companies that was formed to foster the adoption of open virtualization technologies. Open virtualization is important to our clients because it enables technology choice based on business and technical requirements, avoids vendor lock-in to a single platform, and offers lower cost to virtualize and manage their virtual machines. Whether it’s enterprises consolidating infrastructure, deploying private clouds or service providers offering massive public cloud infrastructures to their clients, server virtualization is critical to the design. And for that reason, enterprises and service providers are looking for open and cost-effective virtualization solutions that meet a wide range of technical and business requirements.
KVM has emerged as an impressive enterprise-grade alternative to expensive proprietary virtualization solutions. Many leading independent software vendors (ISVs) are committed to KVM and KVM-based solutions. But much more remains to be done before the market fully realizes the benefits of this technology. And this is how the OVA helps with its first educational webcast.
Free and open access is not the only reason to register for the webcast, though; the real draw is the expertise we will have access to:
and more from OVA members Acronis and Fujitsu.
I invite you to join me to learn more about KVM as an Enterprise-Grade solution with record-breaking performance, superior scalability, advanced security and lower cost.
Let’s show our support of the OVA and invite our colleagues, clients, and friends of affordable and open innovation to this webcast.
The Challenge of Managing Multi-Platform Virtualization
By Alan Radding
For the past decade, while virtualization was experiencing widespread adoption, it was considered an x86-VMware phenomenon. Sure, there are other hypervisors, but for most organizations VMware was synonymous with virtualization. Even on the x86 platform, Microsoft Hyper-V was the also-ran.
Virtualization, however, provides the foundation for cloud computing, and as cloud computing gains traction across all segments of the computing landscape, virtualization is increasingly understood as a multi-platform and multi-hypervisor game. Today’s enterprise is likely to be widely heterogeneous, running virtualized Windows and Linux systems on x86, Power, and System z. By the end of the year, expect to see both Windows and Linux applications running virtualized on x86, Power Systems, and the zEnterprise mainframe.
Welcome to the virtualized multi-platform, multi-hypervisor enterprise. While it brings benefits—choice, flexibility, cost savings—it also comes with challenges, the biggest of which is management complexity. Growing virtualized environments have to be tightly managed or they can easily spin out of control, with phantom and rogue VMs popping up everywhere and gobbling system resources. The typical platform- and hypervisor-specific tools simply won’t do the trick; this will require tools that manage virtualization across the full range of platforms and hypervisors.
Not surprisingly, IBM, which probably has the most virtualized platforms and hypervisors of any vendor, also is the first with cross-platform, cross-hypervisor management in Systems Director’s newest version of VMControl, version 2.4. This is truly multi-everything management. From a single console you control VMs running on x86 Windows, x86 Linux, and Linux on Power. And it is agnostic as far as the hypervisor goes; it can handle VMware, Hyper-V, and KVM. It also integrates with Microsoft System Center Configuration Manager and VMware vCenter.
The multi-platform VMControl 2.4 dovetails nicely with another emerging virtualization trend—open virtualization. In just a few months the Open Virtualization Alliance has grown from the initial four founders (IBM, Red Hat, Intel, and HP) to over 200 members. The open source KVM hypervisor the alliance is championing handles both Linux and Windows workloads, allowing organizations to avoid yet another element of vendor lock-in. One organization has already used that flexibility to avoid higher charges by running the open source hypervisor for a test-and-dev environment. That kind of open virtualization requires the kind of multi-platform virtualization management VMControl 2.4 delivers.
Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
To read more, check out Part I of an important new blog series from our Tivoli Storage team that explains the value of virtualizing storage resources. Also, stay tuned to Parts II and III that take this “storage hypervisor” concept further and into the cloud!
Yesterday, 7 industry leaders announced the formation of the Open Virtualization Alliance. (Click here to see the press release.) IBM, BMC Software, Eucalyptus, HP, Intel, Red Hat and SUSE have created this consortium to foster the adoption of open virtualization technologies, including Kernel-based Virtual Machine (KVM).
The Open Virtualization Alliance will promote examples of customer successes, encourage interoperability and accelerate the expansion of the ecosystem of third party solutions around KVM, providing businesses improved choice, performance and price for virtualization. The Open Virtualization Alliance will provide education, best practices and technical advice to help businesses understand and evaluate their virtualization options. The consortium complements the existing open source communities managing the development of the KVM hypervisor and associated management capabilities, which are rapidly driving technology innovations for customers virtualizing both Linux and Windows® applications.
So, why this focus on KVM? It’s all about choice and cost. KVM is an open source hypervisor that provides enterprise-class performance and scalability to run mission critical Windows and Linux workloads. Because it’s open source, KVM is a cost effective alternative. Because IBM and Red Hat stand behind it, it’s enterprise ready.
KVM is the most recent step in the evolution of open source x86 virtualization. Based on Linux, KVM is the most cost-effective virtualization technology in the market. KVM is scalable and secure, and delivers the benefits provided through the open source community, avoiding vendor “lock-in”.
KVM is unique because it turns the Linux kernel into a bare-metal hypervisor using the hardware virtualization support in Intel and AMD processors. KVM runs Linux, Windows, and other types of virtual machines directly on hardware, and as the x86 hardware virtualization support has improved, so has the performance of KVM.
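As a concrete illustration, the hardware virtualization support mentioned above shows up as CPU feature flags on Linux: `vmx` for Intel VT-x and `svm` for AMD-V, visible in `/proc/cpuinfo`. Here is a minimal Python sketch of checking for those flags; the sample cpuinfo text below is fabricated for illustration, not taken from a real machine:

```python
# Detect hardware virtualization support from /proc/cpuinfo-style text.
# "vmx" = Intel VT-x, "svm" = AMD-V; KVM requires one of these flags.

def virt_extensions(cpuinfo_text):
    """Return the virtualization feature flags present in the text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return sorted(flags & {"vmx", "svm"})

# Fabricated sample: one CPU with Intel VT-x enabled.
sample = "processor : 0\nflags : fpu vme de pse tsc msr vmx sse2\n"
print(virt_extensions(sample))  # ['vmx']
```

On a real host you would pass in the contents of `/proc/cpuinfo`; an empty result means the processor (or its firmware settings) cannot run KVM with hardware acceleration.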
A KVM hypervisor inherits the performance, scalability and security characteristics of Linux - which has been enterprise hardened for over 10 years and is trusted by millions of organizations in the heart of their data center to run their mission critical workloads. KVM scalability was recently demonstrated by a SPECvirt publication using a 64-core, 2 TB IBM System x3850 X5 server that achieved 336 actively running guests, more than twice the capacity of the nearest competitive result. KVM (RHEV) also rated a very credible #2 in a recent InfoWorld virtualization shoot-out.
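To get a rough sense of the consolidation density implied by that SPECvirt result, a little back-of-the-envelope arithmetic on the published figures (336 guests, 64 cores, 2 TB of memory) is instructive:

```python
# Rough consolidation density implied by the SPECvirt result above:
# 336 actively running guests on a 64-core, 2 TB server.
guests, cores, mem_tib = 336, 64, 2

guests_per_core = guests / cores
mem_gib_per_guest = mem_tib * 1024 / guests

print(round(guests_per_core, 2))    # 5.25 guests per physical core
print(round(mem_gib_per_guest, 1))  # ~6.1 GiB of RAM per guest
```

That level of CPU overcommitment is only workable because the Linux scheduler and memory manager underneath KVM handle the multiplexing, which is exactly the design point the next paragraph describes.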
Since KVM is based on Linux, its developers do not need to develop every feature from scratch; rather, they benefit from relevant features in the Linux kernel. KVM naturally leverages the scheduler, memory management, power management, hardware device drivers, platform support, and other features continuously being produced by the thousands of developers in the Linux community, giving KVM a significant "feature velocity" and broad source code review that other virtualization solutions cannot match. KVM has also helped to bring new features to the Linux kernel including kernel same-page merging (KSM), transparent large page support, and a new user-mode device driver infrastructure.
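To make the kernel same-page merging (KSM) idea above concrete, here is a toy Python model of content-based page deduplication: pages with identical contents are stored once and shared, which is why many near-identical guests can fit in less physical memory. This is purely an illustration of the concept, not how the kernel actually implements KSM:

```python
# Toy model of kernel same-page merging (KSM): identical memory pages
# are stored once and shared between virtual machines.
import hashlib

def merge_pages(pages):
    """Map each page to a single shared copy, keyed by content hash."""
    store = {}   # content hash -> the one shared copy of that page
    shared = []  # what each guest page now points to
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)
        shared.append(store[digest])
    return shared, store

# Three guest pages, two of them identical (e.g., the same zeroed page
# in two different VMs) - only two physical copies are kept.
pages = [b"\x00" * 4096, b"guest-unique data", b"\x00" * 4096]
shared, store = merge_pages(pages)
print(len(pages), "pages stored as", len(store))  # 3 pages stored as 2
```

The real KSM thread scans candidate pages in the background and uses copy-on-write so a guest that later modifies a shared page transparently gets its own private copy.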
IBM has brought into KVM its expertise and longstanding commitment to enterprise virtualization and open source:
· Development - IBM developers have provided key KVM features around performance, security, resource over-commitment, and reliability.
· Customer Solutions – IBM has large teams of engineers and architects dedicated to helping customers exploit the benefits of KVM in their enterprise. IBM’s many customer engagements have provided direct input to KVM enhancements.
· Product Support – A large and growing number of IBM products take advantage of open virtualization and KVM technology.
IBM software products that support KVM today include: IBM DB2, IBM ILOG, IBM Change and Configuration Management Database, IBM Lotus Domino Next, IBM Lotus Forms, IBM Lotus Web Content Management, IBM Maximo Asset Management, IBM Tivoli Access Manager, IBM Tivoli Asset Management for IT, IBM Tivoli Monitoring, IBM Tivoli Provisioning Manager, IBM Tivoli Service Request Manager, IBM Tivoli Storage Manager, IBM Tivoli Usage and Accounting Manager, IBM WebSphere, and more ….
KVM has matured rapidly in recent years, and today offers an enterprise-class, open virtualization alternative on IBM System x®, BladeCenter® and other x86 servers. With top performance benchmarks, superior scalability and leading security capabilities, KVM offers both Linux® and Windows® customers a choice as well as lower costs.
KVM is Ready for Business.
Jean Staten Healy
Director, WW Cross-IBM Linux and Open Virtualization, IBM
"Virtualization without good management is more dangerous than not using virtualization in the first place."—Gartner Group
That's a blunt quote, but its point is well taken. Management should be an integral element of any serious virtualization strategy from the start.
If it's not, the virtualized infrastructure certainly won't deliver its full potential; in fact, it might even be a losing investment. Far from virtualization transforming IT for the better, it will have transformed IT for the worse.
This is why Managing Workloads is one of four key priorities in IBM's new virtualization framework—a modular approach designed to help organizations create a tailored virtualization strategy that will work best for them.
Some organizations may already have in place a modern array of management solutions designed specifically to get the best results from virtualization. For the rest, however, this particular module isn't one they're going to be able to skip.
Virtualization represents such a sea change in the nature of the infrastructure that effective management solutions must be deployed, configured, tested and in use from Day One.
Kimchi is a spicy Korean side dish. It is also the code name for a new open source virtualization management project that offers sweet familiarity.
Kimchi is a new open source project aimed at providing an easy on-ramp for people who would like to start using KVM (Kernel-based Virtual Machine) but believe it will be too difficult. Kimchi is targeted at users who may have avoided the open source hypervisor because they don’t have experience with Linux or don’t have the ability to install a management server, or simply don’t have time to invest in Linux administration.
But unlike the spicy side dish Kimchi, the open source project Kimchi offers a taste of something sweet - a familiar user interface for virtualization management. Put simply, that is what Kimchi is all about - removing barriers to using KVM for a set of potential users.
Open Source Tool Designed to Appeal to VMware and Windows Administrators
There are certainly people in the enterprise who are Linux administrators and are perfectly comfortable with the way KVM is today. They regularly work with Linux admin tools and KVM fits right in to their day-to-day practice.
But there are also VMware administrators and Windows administrators who are not familiar with Linux admin practices and are not comfortable with the KVM tools. These people in particular will benefit from Kimchi, since the user interface is similar to that of VMware and Windows tools, thus helping to ease the transition to KVM.
Kimchi’s Role in the KVM Ecosystem
If you have one Linux server, then installing Kimchi on that server is quick and easy. Kimchi puts a thin layer over what is already there with KVM and Linux. You don’t need to install a separate management server. All you have to do is point your browser to the KVM host, and with just a couple of clicks you can install your first guest and start running it.
While it does not come as part of KVM yet, it is hoped that Kimchi will be mature enough to be packaged with some of the community Linux distributions in 2014, and then be included in some enterprise Linux distributions after that. The beauty of the Kimchi interface is that it boils management features down to their essence, simplifying everything, without requiring users to have any Linux skills. And it is rendered in HTML5, so it is independent of both device and operating system: you can use Kimchi from a Windows or Linux workstation, a tablet, or a phone.
Kimchi Reaches a Functional Milestone
Because it is a simple point-to-point management tool, it is not able to provide clustering or resource pooling. Users are limited to managing a few hundred virtual machines at a time, one host at a time.
Kimchi reached a functional milestone in October 2013 with the release of Version 1. Although it is still early in the development process for the project, it is now at the point where we think it has enough functionality for people to try it. The clear advantage is that users don’t need to maintain any management infrastructure - and they can get started using KVM right away.
IBM’s Commitment to Kimchi
IBM supports Kimchi because it represents another way to promote KVM adoption and remove barriers to open source virtualization, which IBM believes is a smart choice. Kimchi is a sound, multi-platform management tool. We, at IBM, are also using it to manage KVM on Power. It will come bundled with KVM on Power, available later in 2014.
Future Development Plans for Kimchi
At this point, the focus for Kimchi going forward is on community building and additional feature development. The input from the community will determine the future direction for Kimchi, which is an Apache-licensed project hosted on GitHub, and incubated by oVirt.org.
If you would like to learn more about Kimchi and get involved, go here.
IBM Distinguished Engineer and Chief Virtualization Architect, Open Systems Development