KVM is well known as a hypervisor for Linux, and its integration with the Linux kernel and inclusion in the mainline kernel source tree make KVM a natural choice for Linux.
But why would anyone use KVM as a hypervisor for Windows? Isn't that counter-intuitive, and wouldn't Hyper-V or VMware be a more natural choice?
Think again. IBM uses KVM as the hypervisor for its "IBM SmartCloud Enterprise" public cloud - for both Linux and Windows instances. And you might want to do the same. To understand why, we need to dig deeper into what a hypervisor needs to do - and how hypervisors relate to operating system kernels.
Fundamentally, a hypervisor needs to create and run virtual machines, allocate and manage memory, protect different virtual machines from trampling over each other, share the processor(s) between different virtual machines, and interface to the hardware devices. Yes, of course there's a lot more complexity in doing this, and especially in doing so efficiently, but the hypervisor is in effect a mini operating system - without the complexity of the graphical user interface, command line utilities and so on.
KVM plugs into an existing operating system - Linux - and turns it into a standalone hypervisor: one that runs on the bare metal and uses the hardware virtualization support (Intel VT-x or AMD-V) included in recent x86 processors.
KVM then uses the existing capabilities of Linux to allocate memory, provide security, and schedule the processor(s) to give the right amount of time to each virtual machine. It doesn't need to reinvent the wheel - Linux already provides these functions, and does so very well.
And then you can take this combination of KVM and Linux, remove the operating system code you don't need, and you end up with an optimized, efficient, standalone hypervisor. Red Hat has done just this to produce RHEV-H, the Red Hat Enterprise Virtualization Hypervisor.
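On the hardware side, everything hinges on those processor virtualization extensions: when the kvm kernel module loads on capable hardware, it exposes the /dev/kvm device that QEMU opens to create virtual machines. A minimal sketch (assuming a Linux host) of checking for both:

```python
import os

def cpu_has_virt_extensions() -> bool:
    """Look for the vmx (Intel VT-x) or svm (AMD-V) CPU feature flags."""
    try:
        with open("/proc/cpuinfo") as f:
            flags = f.read()
    except OSError:
        return False
    return "vmx" in flags or "svm" in flags

def kvm_device_present() -> bool:
    """The kvm module creates /dev/kvm when virtualization is usable."""
    return os.path.exists("/dev/kvm")

if __name__ == "__main__":
    print("virt extensions:", cpu_has_virt_extensions())
    print("/dev/kvm present:", kvm_device_present())
```

Whether either check succeeds depends on the host, so the script simply reports what it finds.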
Now that you've got your standalone hypervisor, why would you use it for virtualizing Windows? Here are three reasons for starters:
* KVM offers a lightweight, high performance, low cost hypervisor
* KVM can support both Linux and Windows guests, providing a common hypervisor for mixed environments
* KVM can use the advanced security included in Linux through SELinux to provide mandatory access control protection between virtual machines
So next time you think about KVM, don't think about it as just a hypervisor for Linux. Think of KVM as a standalone hypervisor for both Linux and Windows.
Program Director - Cross-IBM Linux and Open Virtualization Strategy
(Originally posted on IBM Smarter Computing tumblr blog)
As we enter an era of Smarter Computing, IT organizations are facing exploding demand. Data is more than doubling every two years, and new, higher-quality services are being requested. All this on budgets that, on average, grow less than one percent per year.
As IT organizations learn how to do more with less, virtualizing servers, storage and networks can help them achieve a simpler, more scalable and cost-efficient IT infrastructure. Proper management of the virtualized infrastructure also improves the speed of deployment of new services. The road to improved business agility has four distinct stages, ranging from securing IT efficiency in the consolidation stage to gaining business effectiveness in the optimization stage.
Companies frequently start by virtualizing servers. This can deliver immediate benefits from lower capital expense and reduced energy costs. For example, Edith Cowan University in Australia consolidated a large, distributed, older infrastructure of systems and storage into an end-to-end, cost-effective solution using virtualization on IBM System x. They reduced their physical server count from 600 to 100, achieved significant savings in power and cooling, and freed up administrator time for higher value projects.
Further benefits are available by using IBM Systems Director to manage physical and virtual resources across the entire IBM Systems portfolio (System x, Power, System z, storage, networking) and across multiple virtualization environments (KVM, VMware, etc.). Companies that have implemented Systems Director achieve important savings such as reducing server management costs by up to 34 percent. And using additional tools from IBM Tivoli, IT administrators can deploy new workloads and services more rapidly across IBM and non-IBM environments.
The virtualization journey offers a solid foundation for cloud computing. Clients like China Telecom Jiangxi (.pdf) rely on IBM’s virtualization solutions and expertise to achieve the flexibility and economic benefits of Smarter Computing. Using IBM Power servers, IBM PowerVM and IBM Systems Director VMControl, China Telecom Jiangxi created cloud landscapes and managed pools of virtual systems. They used the IBM SAN Volume Controller (SVC) to virtualize and manage storage. With this IBM solution, they reduced time to market for new offerings from months to days, improved utilization, cut hardware costs by over 50 percent, and reduced power requirements and CO2 emissions.
IBM also provides clients choice, by supporting open source virtualization technologies such as KVM that are cost effective, and offer enterprise-class performance and scalability. In May, IBM helped found the Open Virtualization Alliance, an industry consortium focused on driving market adoption of KVM and fostering an ecosystem of KVM based solutions. Since then, more than 170 members have joined, many of them virtualization, datacenter and Cloud solution providers. This fast pace of enrollment illustrates the excitement we see in the industry around KVM, and the customer demand for an open alternative in virtualization.
IBM’s virtualization solutions are a critical factor of Smarter Computing and the foundation for cloud computing, helping to improve business agility and staff productivity. IBM consistently demonstrates the economic benefits of virtualization on our range of server and storage platforms, and with that addresses the biggest challenges that CIO’s and IT architects face today.
The IBM Research Compute Cloud (RC2) is a private cloud for internal IBM use that currently hosts over 2,000 running VMs. Over a year ago, we changed RC2 to primarily use KVM for its virtualization. We had to convert most of our existing RHEL base images and user images that were used in Xen VMs into a KVM-compatible format. We were able to automate that conversion reliably offline, using chroot and loop-mount based techniques to install non-Xen kernels and update the grub configuration inside the images. Our switch to KVM enabled us to support a much wider range of Linux distributions, because the native IO and virtual IO emulation built into KVM just worked with Linux distributions without complications. The upcoming version 3 of RC2 still uses KVM, but uses a beta version of the IBM Tivoli Virtual Deployment Manager as the back-end deployment engine instead of Tivoli Provisioning Manager workflows. Both of these deployment engines leverage libvirt to manage the definition and life cycle of the KVM-based VMs.
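The offline conversion described above can be sketched as a sequence of shell steps. This is an illustrative outline only: the loop device, mount point, package command and grub edit are all assumptions, the real conversion depends on the image's distribution and bootloader, and actually running these steps requires root privileges.

```python
def xen_to_kvm_steps(image: str, loopdev: str = "/dev/loop0",
                     mnt: str = "/mnt/guest") -> list[str]:
    """Return the shell steps to loop-mount a guest image, chroot into it,
    install a non-Xen kernel, and update the grub configuration."""
    return [
        f"losetup -P {loopdev} {image}",                # attach image, scan partitions
        f"mount {loopdev}p1 {mnt}",                     # mount the guest root filesystem
        f"chroot {mnt} yum install -y kernel",          # install a native (non-Xen) kernel
        f"sed -i 's/kernel-xen/kernel/' {mnt}/boot/grub/grub.conf",  # illustrative grub edit
        f"umount {mnt}",                                # clean up the mount
        f"losetup -d {loopdev}",                        # detach the loop device
    ]

for step in xen_to_kvm_steps("rhel-guest.img"):
    print(step)
```

Keeping the steps as data like this makes it easy to review or dry-run the plan before executing anything against a real image.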
This summer IBM shared plans to extend support for Kernel-based Virtual Machine (KVM) technology to the Power Systems portfolio of server products. On the surface, the announcement sounds simple enough. But like many of IBM’s initiatives, there is a substantial behind-the-scenes effort going on in an open source community to enable this innovation. In this case, much of the work is being done in the QEMU community.
What is the significance of QEMU?
QEMU stands for Quick EMUlator and is maintained by an open source community. As the name implies, it started out as an emulator. It includes a virtual machine environment for several architectures - x86, Power, and System/390, among others. However, KVM doesn't use QEMU's processor-emulation part - it uses only the virtual machine environment.
Although QEMU does not get as much attention as KVM, the technology is critical to the open source virtualization that KVM enables. The QEMU project is strategic to KVM. You can’t have a hypervisor without a virtual machine environment within which to run the operating system.
A hypervisor comprises the virtual machine monitor, which enforces isolation among running workloads, and the virtual machine environment, which provides the virtual hardware. For the KVM hypervisor, the KVM kernel module provides the virtual machine monitor, while QEMU provides the virtual machine environment. These are two open source projects that are combined to create the full hypervisor.
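The division of labor shows up directly in how a guest is started: QEMU supplies the machine, and the -enable-kvm flag tells it to hand CPU virtualization to the KVM kernel module (via /dev/kvm) instead of using its own emulator. The sketch below only builds such a command line for inspection; the image name and sizes are illustrative assumptions.

```python
# Build (but do not execute) a typical QEMU command line for a KVM guest.
qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",                           # use the KVM virtual machine monitor
    "-m", "2048",                            # 2 GiB of guest memory
    "-smp", "2",                             # 2 virtual CPUs
    "-drive", "file=guest.qcow2,if=virtio",  # paravirtual disk interface
    "-net", "nic,model=virtio",              # paravirtual network card
    "-net", "user",                          # user-mode networking
]
print(" ".join(qemu_cmd))
```

In practice most deployments drive QEMU through libvirt rather than hand-built command lines, but the same pieces are underneath.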
Some of the reasons the QEMU project is important to IBM are the same as for KVM. It is an open source project with a strong community. It moves quickly to implement new features, enabling us to bring innovation to IBM customers. In fact, our recent work on KVM for Power last year put us into a tie with Red Hat for contributions to the QEMU community.
What does QEMU enable?
Most of the new features that people are seeking in the KVM hypervisor are actually provided by QEMU components.
In fact, most of the KVM tuning and ease-of-use features that are scheduled for release over the next year have also been developed within QEMU. In addition, most of the features that are being developed to make KVM more scalable and faster have also involved both a QEMU component and a KVM component.
IBM support for QEMU
When you implement a new feature in KVM, frequently it is necessary to implement a counterpart in QEMU to take advantage of that new KVM capability. As a result, there tends to be a large overlap in the developers that are working in KVM and QEMU.
IBM is committed to supporting QEMU development, and is investing many developer hours in the project. Over the years, IBM has participated in many open source projects, including OpenStack, Apache and Eclipse, in addition to Linux. We are approaching the QEMU project with the same intensity.
Mike Day - IBM Distinguished Engineer and Chief Virtualization Architect, Open Systems
A behind-the-scenes peek at the most-attended sporting event in the world
When the US Open kicks off on August 26, it will draw millions of tennis enthusiasts from all over the world for two intense weeks of non-stop, world-class tennis action. Fans will watch events unfold not only at the USTA Billie Jean King National Tennis Center in Flushing Meadows, New York, but also through an integrated online, mobile and social experience delivering real-time play-by-play action, live video streaming, and live match scores and statistics, ensuring that every fan experiences the thrill and excitement on center court. I am proud to say that for more than 20 years, the US Open has relied on IBM as the end-to-end IT provider to enable and deliver this interactive experience through a myriad of fan-facing technologies.
Understanding that it is not possible for all tennis fans to make it to New York for the matches, it is a priority for the USTA (United States Tennis Association), the governing board for tennis in the United States, to provide content and information to them any way they want at any time of day or night. To support the USTA’s goal, IBM delivers the US Open experience to fans through the digital platform, and maintains uninterrupted availability of service throughout the event.
The capabilities provided to US Open tennis fans continue to expand. To name a few, there is the popular SlamTracker application that provides real-time scoring to fans for all matches. IBM's "Keys to the Match" analysis is built into the SlamTracker application. The Keys are generated by using IBM predictive analytics software to analyze over 41 million data points from the past eight years of Grand Slam data for all men's and women's matches. This feature helps fans understand the important things a player must do to increase the likelihood of winning a match. And mobile support has been expanded to include iPads as well as iPhone and Android devices. Fans who are physically at an event or watching the US Open on television at home often want access to digital information, to join the conversation on social media, and to achieve a greater sense of control. The US Open "second screen" experience enables more fan interaction.
Social Media Insights Enhance Fan Experience and System Availability
IBM’s analysis at the Open has expanded to encompass social media. This helps to determine the most popular players, and aids IBM in ascertaining - as play is in progress - the matches that are likely to have the greatest fan traffic.
Behind the scenes, we are using IBM analytics to predict, allocate and monitor capacity in the cloud. By analyzing tournament, player and social data, the system continuously predicts traffic to the Web site and automatically allocates or deallocates the appropriate resources. Applying predictive analytics in the cloud enhances the online user experience. Since we can optimize projections in order to add or reduce capacity, everyone can have an optimal experience. It also saves money, since allocating capacity only when it's needed means servers aren't sitting idle. All of our systems are able to generate highly accurate forecasts of how much traffic to expect, but we also look at our log history and social media discussions to figure out if there is a spike in interest around a certain match that may translate into a rapid need for additional capacity.
A great example of the insight social media can provide is the Australian Open in 2012 when a match between Novak Djokovic and Rafael Nadal went on for nearly six hours. As it continued, it became clear that this was one of the longest finals matches in Grand Slam history. Once that happened, people started tweeting about it, and social media discussions sprang up. This drove additional traffic to the website because people wanted to witness it themselves.
Elastic Capacity Enabled by IBM and Open Source Technologies
The elastic capacity of the IT infrastructure supporting the US Open is made possible by the IBM SmartCloud technologies. This enables fast creation and dynamic allocation of resources transparently to users, while also supporting real time access. The US Open cloud environment includes virtualized IBM servers, software and storage across the globe, and supports the continuous availability and scalability required. The capabilities provided by IBM Monitoring optimize workloads and provide visibility necessary to allocate resources and more intelligently plan future growth.
Like most big shops, we are not homogeneous. We rely on our own IBM technologies and open source. Real-time and historical data analytics is enabled by IBM Smarter Analytics which is a combination of products including IBM InfoSphere BigInsights built on top of Apache Hadoop and IBM InfoSphere Streams. We also use a variety of servers, including both IBM Power Systems and IBM System x, for our cloud. And, we use a mix of operating systems. We rely on Red Hat Enterprise Linux, SUSE Linux Enterprise Server and AIX. We have different capabilities that we must support and which require different operating systems. Cloud enables us to do that very easily. We also use the KVM (Kernel-based Virtual Machine) hypervisor to manage our virtual machines on System x – and IBM has just announced that KVM on Power will soon become available.
The reality is that each platform has its own attributes and that is why we include them in the mix. For example, Power's Logical Partitioning (LPAR) divides a server's resources into virtual "logical" partitions, and we continually take advantage of the LPAR mobility aspect of Power Systems. Power allows us to migrate live workloads from one physical frame to another without any impact. If we have a failure on one of our machines, we can do what we call "frame evacuation," and move all the running servers including the databases to another machine, then make a repair, and move them back. You can do this on the fly in the middle of the day, in the middle of a peak match, without any impact to the business and, for us, that is critical.
The good news is that when the US Open happens in Flushing Meadows starting this week, tennis fans will have the highest quality experience possible, whether they are at the Tennis Center in person, or monitoring the matches online. Even better news is that all of these technologies can be applied to a wide range of use cases across many industries and are available today.
You can learn more at: http://www.ibm.com/usopen
Software Engineer and Master Inventor, IBM
Last week, IBM was the premier sponsor of the Red Hat Summit in Boston for the ninth year in a row. This conference is a highlight for me each year because it gives both companies the opportunity to showcase the joint solutions we deliver to our clients, hear what mutual customers have to say first-hand, and provide a peek at what will be coming in the year ahead. There is always a lot of energy at the Red Hat Summit, spurred by thought-provoking presentations and the unveiling of major innovations. This year was no exception.
Kicking off IBM’s participation in the Red Hat Summit, Arvind Krishna, GM Development and Manufacturing, IBM STG, delivered a keynote in which he announced new IBM initiatives to further support and speed up the adoption of the Linux operating system across the enterprise. Arvind told attendees that IBM is opening two new Power Systems Linux Centers in Austin, Texas, and New York in addition to the Power Systems Linux Center launched in Beijing in May. Arvind also spoke about IBM’s plans to extend support for Kernel-based Virtual Machine (KVM) technology to the Power Systems portfolio of server products – giving IBM Power customers an open choice. The new centers will make it easier for developers to build new applications for big data, cloud, mobile and social business computing using Linux – and in the future, KVM – with the latest IBM POWER 7+ processor technology. Signifying the importance of these announcements, the news was covered widely in the news media, including Forbes' DividendChannel, ZDNet, eWeek, Linux and Life, Computer Business Review, and The Register.
At the Summit, Red Hat announced the global availability of Red Hat Enterprise Virtualization 3.2, which builds on the industry-leading performance of the KVM hypervisor to offer an enterprise-class data center virtualization and management solution, with fully supported Storage Live Migration and a new third-party plug-in framework. Red Hat also announced that IBM is joining the Red Hat OpenStack Cloud Infrastructure Partner Network, the availability of the new Red Hat OpenStack Certification, and the launch of the Red Hat Certified Solution Marketplace. The Red Hat Certified Solution Marketplace already includes more than 500 products that have been certified as OpenStack compute (Nova) compatible, from technology leaders - including IBM. IBM's collaboration with Red Hat and the OpenStack ecosystem is in line with our commitment to give clients the flexibility, cost-effectiveness, and security that is necessary for cloud computing - both now and in the future.
It was clear at the Summit that cloud is on our customers’ roadmaps. Both IBM and Red Hat understand the importance of the cloud and the critical role that Linux and KVM play in the cloud. Whether it is private, public, or hybrid, we know customers have to virtualize to get there – and both IBM and Red Hat are committed to KVM as the virtualization hypervisor.
There were many other high points at this year’s conference as well. In our booth, IBM profiled technology from IBM PureSystems, IBM System x, IBM BladeCenter, IBM Power Systems, and IBM System z, and demonstrated the latest IBM solutions for cloud computing, open virtualization with KVM, and big data. I also had the opportunity to moderate a panel discussion in which representatives from IBM, Red Hat, and the University of Connecticut participated. The discussion focused on common Red Hat Enterprise Virtualization, KVM, and OpenStack use cases and the business benefits that are being realized. I was pleased to see a packed room with the audience asking many more technical questions about KVM than in prior years.
As I left the conference this year, I was struck by the thought that something was very different. Whether customers are discussing the use of KVM in the cloud, or adding it as a second hypervisor for “hyperdiversity,” the debate about whether KVM is technically ready is now over. It has achieved impressive SPECvirt and TPC-C benchmarks, security certifications, and according to IDC, is showing impressive growth in unit shipments. We are no longer explaining what KVM is. Instead, this year, we were able to show a robust portfolio of clients that have realized success with KVM. The conversation around KVM has changed.
Jean Staten Healy - Director, Worldwide Linux and Open Virtualization, IBM
Just a few years ago, many enterprise customers predicted they would never use cloud computing because it was too risky. Fast forward, and today the picture is a stark contrast. Compelling economic advantages have trumped all other concerns. Worldwide revenue from public IT cloud services, which exceeded $21.5 billion in 2010, will skyrocket to $72.9 billion in 2015, representing a compound annual growth rate of 27.6% - four times the projected growth for the worldwide IT market as a whole, according to IDC cloud research.
Once that initial leap to the cloud has been made, what else do organizations look for? It is clear that they want a choice of hypervisor technologies for their cloud deployments – including open source options such as KVM (Kernel-based Virtual Machine). According to a recent IDC white paper, “KVM: Open Virtualization Becomes Enterprise Grade,” cloud providers are embracing KVM. Many prominent public clouds are built on KVM, including the Google Compute Engine, HP Cloud, and IBM SmartCloud Enterprise. KVM has also become the unofficial reference standard for OpenStack, and is the choice of over 95% of OpenStack clouds, the IDC paper reports.
Beyond service providers, organizations that are deploying private clouds are also more amenable now to using a new hypervisor. This is the result of hypervisor technology being increasingly viewed as offering a range of enterprise-grade alternatives. The IDC white paper points out that, when asked in a survey which hypervisor they would prefer to use with their private cloud system, more than half of respondents said they would like to use a new hypervisor rather than the existing one. In addition, IDC says that when choosing the second hypervisor, companies are equally likely to choose an open source solution as a proprietary one, a result of maturation of open source technologies.
Why do organizations choose KVM for the cloud?
Cost – For anyone deploying cloud services, but particularly for cloud service providers competing for business, the ability to provide a high level of service while keeping infrastructure costs down is critical. For example, DutchCloud, a cloud service provider, has found that using IBM SmartCloud Provisioning enables it to bring in customer environments on VMware and reduce costs by moving them to KVM. Not only is KVM affordable, but for organizations already using Linux servers, KVM is included in the main enterprise Linux distributions.
Flexible tooling – Since there is no single management infrastructure that must be used, KVM enables choice in terms of cloud and virtualization management. Companies can build their own toolset, or they can use a variety of products, including OpenStack, as well as IBM products such as SmartCloud Provisioning and SmartCloud Orchestrator which support KVM. Solutions that support multiple hypervisors enable KVM to easily be added to the mix to take advantage of its lower costs.
Scalability and fast provisioning – KVM can pack virtual machines very densely on a host, as demonstrated in a recent SPECvirt benchmark, resulting in great efficiency. KVM also uses thin provisioning, which means the guest image file only consumes space for data that has actually been written, so only a portion of the file needs to be transferred over the network to the host machine. This enables organizations to start up the guest quickly, an important consideration for cloud deployment.
Security – KVM benefits from SELinux, which enables it to provide Mandatory Access Control and enforced isolation of virtual machines. Proving the high level of security provided by SELinux and KVM and setting the stage for broader enterprise adoption, Red Hat and SUSE enterprise Linux distributions with KVM have achieved Common Criteria Certificates at EAL 4+.
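The thin-provisioning point above can be seen in miniature with an ordinary sparse file, which is a simplified stand-in for a qcow2 image (the sizes here are arbitrary): the file reports a large apparent size while consuming almost no disk blocks until data is written.

```python
import os
import tempfile

# Create a 1 GiB sparse file and compare apparent size with allocated size.
with tempfile.TemporaryDirectory() as d:
    img = os.path.join(d, "guest.img")
    with open(img, "wb") as f:
        f.truncate(1 * 1024**3)       # 1 GiB apparent size, no blocks written
    st = os.stat(img)
    apparent = st.st_size             # what `ls -l` would report
    actual = st.st_blocks * 512       # what `du` would report
    print(apparent, actual)
```

On filesystems that support sparse files (ext4, XFS, tmpfs), the allocated size stays near zero, which is why a thinly provisioned guest image can be created and transferred so quickly.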
Today, because of these compelling advantages, many of our clients are choosing KVM, both for public clouds and private clouds.
Jean Staten Healy
Director, Worldwide Linux and Open Virtualization, IBM
IBM SmartCloud Provisioning is a workload-optimized cloud solution that combines infrastructure and platform capabilities to allow quick cloud deployment - and support for KVM and multiple hypervisors helps keep costs under control.
Requirements for public and private cloud provisioning have similarities, but there are also key differences. All cloud providers, whether private or public, are concerned with availability and security. But public cloud providers have the added requirement to remain flexible to meet a wide range of customer deployment needs while, at the same time, keeping a firm grip on costs, both to remain competitive and to ensure their own profitability. IBM SmartCloud Provisioning, which was designed specifically as an infrastructure-as-a-service solution, can play a role in all of those areas and provide additional capabilities with rapid composite application deployment.
Rather than requiring service providers to build a cloud from scratch using virtualization management tools, IBM SmartCloud Provisioning offers a high-scale, low-touch cloud provisioning system. It is a hypervisor-agnostic, infrastructure-as-a-service solution enabling fast, automated cloud provisioning, parallel scalability, integrated fault tolerance and a foundation for more advanced cloud capabilities. In addition, the private cloud environment offers near-zero downtime and automated recovery from hardware and software failures across heterogeneous platforms.
Support for Open Standards and Hypervisor-Agnostic
While IBM SmartCloud Provisioning was originally built on top of KVM (Kernel-based Virtual Machine), support has been expanded to include VMware ESXi, vCenter, PowerVM, Hyper-V and Xen as well. Support for multiple hypervisors is where we think the industry is going, and the benefit of KVM support in the mix is revealed when you look at the needs of the cloud providers.
Cloud providers are on very tight budgets and they will succeed in terms of selling their services only if they are able to provide IT services to customers at a lower cost than the customers could provide for themselves, so it is very cost-competitive. In order for the cloud service providers to make a profit, the cost of the underlying infrastructure is really important. And then, to retain cloud customers, the reliability, speed and scalability are also very important.
For example, Dutch Cloud is a leading ISP based in the Netherlands, focused on SME customers in a few key industries including healthcare and electronics. It provides a range of cloud-based services - from fully managed IaaS through to disaster recovery solutions. Customers select Dutch Cloud for the quality of service delivered and its service assurance.
Dutch Cloud wanted to improve the delivery of its cloud services in terms of cost, speed, and agility, and minimize administration, as well as scale delivery costs to business volumes. Since implementing SmartCloud Provisioning, Dutch Cloud has been able to deploy new services in seconds rather than hours, and has even deployed hundreds of new VM instances in minutes. Adding the cost efficiency, Dutch Cloud has also been able to move a number of its customers from proprietary hypervisors to the more affordable KVM.
Because SmartCloud Provisioning is hypervisor-agnostic, you can match it with a range of hypervisors including VMware ESXi, vCenter, PowerVM, Hyper-V and Xen. There are obviously going to be times when a client indicates a preference for a particular hypervisor. But when there is no specific preference and service is all that matters, the decision plays out this way from the cloud provider's point of view: if the hypervisors are equivalent in capability, and equivalent in virtualization management - because IBM SmartCloud Provisioning is available across a range of virtualization technologies - then it comes down to cost, and KVM wins there hands-down. SmartCloud Provisioning's multi-hypervisor support enables the provider to offer a range of virtualization options without locking the customer in, and because KVM costs less than proprietary alternatives, it opens up a level of affordability that would not be possible otherwise.
And in terms of security, for public sector customers in particular, KVM’s Common Criteria Certification at Evaluation Assurance Level 4+ (EAL4+) is significant. It means that, like other hypervisors, the KVM hypervisor on Red Hat Enterprise Linux and IBM x86 servers now meets government security standards allowing open virtualization to be used in homeland security projects, command-and-control operations, and throughout government agencies that previously were limited to proprietary virtualization technologies.
KVM also goes beyond competitors in terms of security with SELinux (Security-Enhanced Linux), which provides much greater protection and isolation between virtual machines by enabling mandatory access control rather than just discretionary access control. With discretionary access control, permissions are based on a user's role, whereas with mandatory access control a user has to be specifically authorized to access a particular resource. If a compromised virtual machine attempts to impersonate someone with a privileged role, it can get around discretionary access control; with mandatory access control, it still does not have permission, so the compromise is contained. This level of military-grade security is why SELinux was originally developed by the National Security Agency. And because KVM is built on top of Linux, it can apply SELinux protection directly to its virtual machines.
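The mandatory access control scheme used for KVM guests (known as sVirt) works by tagging each virtual machine process and its disk image with a random pair of SELinux MCS categories; access is allowed only when the categories match. The following is a toy model of that idea, not the actual SELinux implementation, and the labels shown are made-up examples:

```python
def mcs_allows(process_label: str, file_label: str) -> bool:
    """Toy check: grant access only when the MCS category pair
    (the last colon-separated field of the label) matches."""
    # Labels look like "system_u:system_r:svirt_t:s0:c123,c456"
    return process_label.split(":", 4)[-1] == file_label.split(":", 4)[-1]

vm1 = "system_u:system_r:svirt_t:s0:c123,c456"        # the VM1 qemu process
vm1_disk = "system_u:object_r:svirt_image_t:s0:c123,c456"
vm2_disk = "system_u:object_r:svirt_image_t:s0:c789,c1012"

print(mcs_allows(vm1, vm1_disk))  # True: categories match, VM1 can use its own disk
print(mcs_allows(vm1, vm2_disk))  # False: a compromised VM1 cannot touch VM2's disk
```

The crucial property is that the kernel enforces this check regardless of the process's user or role, which is exactly what distinguishes mandatory from discretionary access control.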
The Bottom Line for Cloud Providers
There are several things that cloud service providers have to consider when provisioning a cloud. The first is the cost of software. The second is the level of virtual machine density that can be achieved - in other words, how many virtual machines can run on a given piece of hardware while still maintaining a good quality of service, because the more virtual machines per server, the lower the unit costs. Then there is the overall quality of service provided, in terms of reliability and performance. And finally, there is management. IBM SmartCloud Provisioning is all about answering this question: how do you provision large numbers of virtual machine instances across clouds very quickly, with minimal need for administration - achieving maximum automation, maximum self-healing, and maximum detection of and recovery from failures?
Test drive a full version of IBM SmartCloud Provisioning with the 60-day no-charge trial.
Jean Staten Healy
Director, Worldwide Linux and Open Virtualization, IBM
Virtualization+IBM 2700039S5C Tags:  hypervisor ibm adam systemsdirector vmcontrol gabriel kvm hyperversity linux jollans consulting 7,729 Views
Last month I blogged about the surprising level of hypervisor diversity that we’re seeing in use by customers – as shown by a report published by Gabriel Consulting and based on a survey of hundreds of IT professionals.
Now I want to discuss what's behind this - why are so many customers mixing x86 hypervisors, and what are their reasons? In essence, it comes down to three factors: lower cost, technical differences, and customers' ability to manage multiple hypervisors.
You can read more about what’s driving hypervisor diversity in the second new Gabriel Consulting Report ‘Hyperversity’ Driven by Technical & Cost Differences.
Cost differences matter
The first and most obvious factor is cost. We’re seeing the familiar cycle of high-priced proprietary technologies being challenged by lower-cost open source innovation – the same situation that played out with Linux, Eclipse and Apache.
Although the Gabriel Consulting report shows that customers value proprietary hypervisor technology, it also shows that the cost of implementing it everywhere can often be too high: half of the respondents agreed with the statement “Cost issues make standardizing on one suite too expensive…”
We’re also hearing this from our customers, from large banks to cloud providers. Cost is one of the main reasons they’re adopting KVM.
But according to the report, while cost is a driver for hyperversity, it isn't the major one. Intriguingly, technical factors are.
Technical differences matter even more
71% of respondents agreed with the statement “Technical differences between various solutions drive hypervisor diversity”.
The first and most obvious factor behind this is the affinity between the hypervisor and the operating system. This is clearly a major factor for KVM and Linux, as well as for Hyper-V and Windows. Hypervisors and operating systems need to perform many of the same tasks - starting processes, managing memory, accessing devices. If Linux comes with the hypervisor already included, integrated, and tested, that's a strong reason for adopting KVM.
The second factor is that hypervisors such as KVM, which are based on an existing operating system, don't need to reinvent the wheel and can exploit the scalability, security and device support that's already there. This is one of the reasons why KVM holds the top seven SPECvirt performance benchmark results - it's leveraging Linux, which already has the scalability.
The final factor is how well suited the hypervisor is to supporting cloud computing. The Gabriel Consulting report saw a correlation in its data between KVM and private cloud projects, and speculated on whether there is something about KVM that makes it more amenable to driving private clouds.
We think that the scalability and high VM density provided by KVM, along with its open approach and low cost, makes it a great choice for cloud computing. This is why IBM uses KVM as the hypervisor for both its public cloud, IBM SmartCloud Enterprise, and also its largest internal private cloud, the IBM Research Compute Cloud.
Managing multiple hypervisors
Of course, the adoption of multiple hypervisors, like the prevalence of multiple operating systems, means that customers have to be able to manage the hypervisor diversity successfully.
In the early days of adoption, IT shops are likely to use the virtualization management tools most closely connected with the hypervisors – VMware’s vCenter, Microsoft’s Systems Center, and Red Hat’s Enterprise Virtualization – Management.
As the hypervisor diversity trend continues, this means having multiple management tools and multiple skill sets.
The idea of managing a mixed hypervisor environment from a single pane of glass then becomes increasingly attractive – whether from ISVs in the Open Virtualization Alliance such as Zenoss and ManageIQ, or enterprise systems management vendors such as IBM with Tivoli and IBM Systems Director VMControl.
Whatever happens, it looks like hypervisor diversity is here to stay for at least the next few years – and that promises to make for interesting times.
Program Director, Linux and Open Virtualization Strategy, IBM
Generally, when we think about new technology we tend to focus on all the advantages it adds. And in the case of server virtualization - a technology that has been strongly embraced over the past decade as it expanded beyond the mainframe into the realm of x86 servers - the advantages are many. Virtualization is being widely embraced in the enterprise because it enables greater utilization of existing infrastructure, flexibility in reallocating resources when and where they are needed, and, not incidentally, significant cost savings due to a smaller physical footprint, energy efficiency and the ability to avoid or postpone new hardware purchases.
Those are some pretty powerful advantages – no argument there. But what about the complexity that comes with the need to manage both physical and virtualized servers, and the increasing need to manage more than one hypervisor? That’s a compelling issue as well – and this is where IBM Systems Director with VMControl comes in.
WHAT IS SYSTEMS DIRECTOR?
The base level of capability, which we call VM lifecycle management, includes the ability to create or delete virtual machines, configure them, start, stop, and pause them, or relocate them between servers - all of the basic operations that get done every day at a customer site. And we offer that level of support for the broadest range of hypervisors. On System x, we include it for VMware ESXi, for KVM (Kernel-based Virtual Machine), and for Microsoft Hyper-V. We also offer that level of support for PowerVM on the Power platform and z/VM on the mainframe.
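The lifecycle operations above boil down to a small state machine that every virtualization manager implements in some form. The following Python sketch is purely illustrative - the class and method names are hypothetical, not the VMControl or libvirt API:

```python
# Minimal sketch of VM lifecycle management as a state machine.
# Names are illustrative, not the actual VMControl or libvirt API.

class VirtualMachine:
    # Allowed state transitions for each lifecycle operation.
    TRANSITIONS = {
        "start":    {"defined": "running", "paused": "running"},
        "stop":     {"running": "defined", "paused": "defined"},
        "pause":    {"running": "paused"},
        "relocate": {"running": "running"},  # live migration keeps the VM running
    }

    def __init__(self, name, host):
        self.name, self.host, self.state = name, host, "defined"

    def apply(self, operation, target_host=None):
        allowed = self.TRANSITIONS[operation]
        if self.state not in allowed:
            raise ValueError(f"cannot {operation} a {self.state} VM")
        self.state = allowed[self.state]
        if operation == "relocate":
            self.host = target_host

vm = VirtualMachine("web01", host="x3650-a")
vm.apply("start")                            # defined -> running
vm.apply("relocate", target_host="x3650-b")  # move between servers while running
assert (vm.state, vm.host) == ("running", "x3650-b")
```

The value a tool like VMControl adds is exposing this same small set of verbs uniformly across ESXi, KVM, Hyper-V, PowerVM and z/VM, so the operator does not need a different workflow per hypervisor.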
Beyond this base level, IBM also offers higher level editions of VMControl that add functionality such as image management and system pools, which is the ability to combine multiple virtual machines across multiple servers and manage them as though they were a single physical entity. That advanced support is now available for PowerVM on Power Systems and for KVM on System x, and this level of advanced support for additional hypervisors is on our product roadmap.
WHY IS IT IMPORTANT TO SUPPORT A RANGE OF HYPERVISORS?
In the past, many customers would purchase both the hypervisor and the virtualization management from vendors such as VMware. But now, with the choice of hypervisors available, and the advances made by Windows with the Hyper-V hypervisor and by Linux distributions such as Red Hat with KVM, customers are getting very good hypervisors and virtualization solutions “in the box” with the operating system at no extra cost. Since the operating system is something they have to pay for anyway, many customers are thinking: why pay an additional “tax” for third-party virtualization when I am getting “good enough” hypervisor technology bundled with the operating system?
With Windows Datacenter Edition, clients get Hyper-V and can run an unlimited number of Windows guests; with the equivalent version of Red Hat Enterprise Linux, they get KVM and can run unlimited Linux guests at no additional cost. As a result, clients are not removing VMware, but as they deploy new servers they are choosing not to put VMware on everything. For systems targeted primarily at Linux workloads, clients often choose Red Hat Enterprise Linux since they get KVM at no additional cost, and with IBM Systems Director VMControl, we provide a way to manage the KVM hypervisor that comes with Red Hat Enterprise Linux 6.2.
MANAGING PHYSICAL AND VIRTUAL RESOURCES THROUGH ONE PANE OF GLASS
The transition to cloud computing blurs the lines between administrators and users, with workload provisioning being delegated to end users and consumption of IT resources shifting to a ‘pay as you go’ model. Likewise, administrators are having to broaden their skill sets beyond a single type of resource (such as servers, networks or storage) and become multi-skilled in order to support cloud infrastructures requiring pooled resources. IBM Systems Director is rapidly evolving to support the increasingly sophisticated demands of this next generation of administrator.
KVM (Kernel-based Virtual Machine) is gaining traction in the enterprise as a virtualization solution that provides high performance, scalability, and cost efficiency. But misconceptions still abound about this open source hypervisor. Some falsehoods continue to be perpetuated by organizations offering competing products, and others because KVM is maturing quickly and the up-to-date, correct information is not yet widely known. Here, we tackle some of the most persistent myths about KVM - because it’s time to set the record straight.
Myth #1: KVM is a type 2 hypervisor hosted by the operating system, not a bare metal hypervisor.
On x86 hardware, KVM relies on the hardware virtualization instructions that have been in these processors for seven years. Using these instructions the hypervisor and each of its guest virtual machines run directly on the bare metal, and most of the resource translations are performed by the hardware. This fits the traditional definition of a “Type 1,” or bare metal hypervisor.
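The hardware support KVM depends on shows up in the CPU flags Linux exposes through /proc/cpuinfo: vmx for Intel VT-x, svm for AMD-V. A small sketch of the check (the sample flags string below is illustrative, not captured from a real machine):

```python
# Detect hardware virtualization support from a /proc/cpuinfo "flags" line.
# "vmx" = Intel VT-x, "svm" = AMD-V; on x86, KVM requires one of the two.

def has_hw_virt(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return bool(flags & {"vmx", "svm"})
    return False

# Illustrative sample, not from a real machine:
sample = "flags\t\t: fpu vme de pse tsc msr pae vmx sse sse2 ht lm"
assert has_hw_virt(sample)

# On a live Linux system you would read the real file:
# with open("/proc/cpuinfo") as f:
#     print(has_hw_virt(f.read()))
```

If the flag is present and the kvm kernel module loads, guests execute directly on the processor in hardware-assisted guest mode, which is what qualifies KVM as a Type 1 hypervisor.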
You can also get KVM packaged as a standalone hypervisor - just as VMware ESX is packaged - although KVM was not initially available in that form. One way of doing this is with Red Hat Enterprise Virtualization (RHEV).
Myth #2: KVM only runs Linux workloads.
Myth #3: KVM is only available for x86 platforms.
Myth #4: KVM is only available from Red Hat.
Myth #5: KVM is only available as part of enterprise Linux distributions.
Myth #6: KVM is not secure.
But the myth about security persists because of the fact that KVM is based on Linux - and that has a whole bunch of baggage with it. There are several reasons for this.
One is that some people think open source code is not secure because anyone can audit the code and find entry points and potential bugs that could be escalated into security issues. In reality, auditing source code has an overwhelming benefit to security: the more people who audit code, the more secure that code becomes. When you use proprietary hypervisor technology with closed source code, you never get to review that code, so you have no idea what has been audited for security and what hasn’t. And furthermore, anybody with a disassembler can disassemble the binary image and start looking at the assembly code to find security holes.
The second reason people say it is not secure is that when KVM is packaged as part of an enterprise Linux distribution, the distribution can include additional components such as an HTTP server, more than one shell, programming languages such as Perl and Python, and almost too many tools to mention. In that case, you have to take the Linux distribution - even an enterprise one - and spend some time locking it down yourself, or use something like RHEV-H, which is a much smaller component and is locked down by default.
The bottom line is that KVM is not insecure simply because it is based on enterprise Linux, but you may want to remove packages from the distribution that could have issues – or simply use the RHEV-H version.
Myth #7: There are no virtualization management tools available for KVM.
For more information about KVM, access a new white paper, “KVM: The Rise of Open Enterprise-Class Virtualization,” from the Open Virtualization Alliance, an organization founded to promote awareness and adoption of KVM.
Mike Day, Distinguished Engineer and Chief Virtualization Architect, Open Systems Development, IBM
At Red Hat Summit 2012, long-time partners IBM and Red Hat showcase cost-effective, open source solutions to help “Transform Your IT”
At IBM, we know you can’t have open source without choice, and there is no choice without committed partners and a rich ecosystem.
As we look forward to Red Hat Summit 2012, and Red Hat celebrates the 10-year anniversary of Red Hat Enterprise Linux, I am so pleased that IBM will be the Premier Sponsor of the event taking place on June 26 - 29 in Boston at the Hynes Convention Center.
In the late 1990s, IBM made the decision to fully support Linux, and formed a partnership with Red Hat that has only strengthened over time. While Linux is now considered a mainstay of enterprise IT, it was the introduction of Red Hat Enterprise Linux as well as IBM’s commitment to the open source operating system in the enterprise that were major factors in Linux being taken seriously for business. Offering full support capabilities and ready to run in an enterprise environment, Red Hat Enterprise Linux, introduced 10 years ago, was the first real commercial, business-focused release. And IBM was right there with Red Hat. Basically, Red Hat runs on IBM, and IBM runs on Red Hat.
From the beginning, IBM and Red Hat shared a vision for Linux in the enterprise and we have furthered that vision by maintaining complementary interests that don’t overlap or compete with one another in major areas. By staying independent of one another, but working jointly, we have been able to make the Linux ecosystem richer and stronger. During our 12-plus years of collaboration, IBM and Red Hat have focused on a large number of Linux projects, and a large number of joint customers around the world.
The Partnership Advances
But technology and even partnerships must grow and change to remain meaningful. And so, the IBM-Red Hat story is also moving forward from the initial focus on the open source operating system to also include advancements such as open source virtualization.
Following the emergence of x86 hardware support for virtualization in the mid-2000s and, more recently, the creation of the KVM (Kernel-based Virtual Machine) project, Red Hat and IBM started investing development resources in the KVM open source community to accelerate the adoption of open virtualization alternatives. IBM and Red Hat are both founding partners of the Open Virtualization Alliance, which is dedicated to fostering awareness and market adoption of open virtualization and KVM.
The launch of Red Hat Enterprise Virtualization 3.0 is a testament to our companies’ significant investment in virtualization. If you virtualize with KVM and Red Hat Enterprise Virtualization, you want to do it on IBM System x. For example, as a new joint IBM-Red Hat case study shows, LetterGen, an IT services provider in Belgium, migrated from VMware to Red Hat Enterprise Virtualization, along with Red Hat Enterprise Linux as the core operating system, and as a result, was able to reduce costs by 67%, increase utilization, optimize system maintenance and improve service levels all around.
Transform Your IT
It is apt that the theme of the upcoming Summit is “Transform Your IT” because that is just what IBM and Red Hat have been helping our joint customers do - and that is why I am really looking forward to this year’s Red Hat Summit. There will be wonderful networking opportunities for our clients and business partners to get to know one another better and great presentations from thought leaders and technical experts.
One you won't want to miss is a keynote on Tuesday, June 26, at 5:30 pm in the Main Hall, on “Open Technologies for the Next Era of Computing”. This presentation will be delivered by Robert LeBlanc, Senior Vice President, IBM Software Group. As you may know already, Robert has long advocated IBM’s participation in the open source community, and, when serving as Vice President of IBM Software Strategy, was responsible for crafting the Linux strategy for IBM.
Of course, IBM will also have a booth and there will be additional sessions covering how hardware and software solutions from IBM and Red Hat provide solutions for a Smarter Planet. The entire IBM product line is enabled for Red Hat Enterprise Linux, making it easy for businesses to reduce complexity, lower costs and take advantage of the power of open standards. Technology from IBM System z, IBM PowerLinux, IBM System x and IBM BladeCenter will be profiled, demonstrating the latest IBM solutions for cloud computing, open virtualization, and systems management. In addition, IBM will be showcasing IBM PureSystems, a new family of expert integrated systems, optimized for performance, virtualized for efficiency, designed for cloud.
I will also moderate a panel discussion on Wednesday, June 27 at 10:40 am in Room #309. Titled “KVM on IBM System x – Leverage the Ecosystem!,” we’ll talk with members of the open virtualization ecosystem about why KVM and Red Hat Enterprise Virtualization work so well on IBM System x, and about the business value your organization can get out of these solutions. Participating in the discussion will be Wes Ganeko, IBM STG North American System x Sales Executive; Dmitri Joukovski, Vice President of Product Management, Acronis; Carl Trieloff, Director Open Source and Standards, Senior Consulting Engineer, Red Hat; and Dr. Gilad Zlotkin, VP Virtualization & Management Products, Radware.
I hope you will join us at the Red Hat Summit 2012 and take part in advancing the story of open source IT innovation for the future!
You can learn more about IBM’s participation at the Summit here.
The Open Virtualization Alliance will be at Red Hat Summit in booth #2432.
If you have not registered yet for the Summit, there is still time. Register here.
We’ll also be tweeting about our activities at Red Hat Summit 2012 – so follow us on Twitter at: @OpenKVM.
Jean Staten Healy
In the early days of Linux, it was often the technical people in organizations who knew about it and were already implementing, while management had little awareness of Linux. We are seeing that same trend occurring now with KVM. To help further the overall understanding and awareness of KVM and fill in the information gap, here is a list of the most frequently asked questions that we at IBM have encountered in recent panel discussions, conversations, and interviews.
FAQ 1: How does KVM fit in with cloud?
How enterprise-ready is KVM? IBM uses KVM in its public cloud - IBM SmartCloud Enterprise - and our own IBM Research Compute Cloud (RC2), a private cloud for internal IBM use, also relies on KVM for its virtualization.
In addition, IBM customer Dutch Cloud is a cloud service provider in the Netherlands. Open standards are very important to Dutch Cloud, and being able to support both KVM and VMware hypervisors with IBM SmartCloud Provisioning enables them to offer choice to their clients. “KVM also saves us a lot of money, because of its lower licensing costs. We are using both KVM and VMware, so IBM SmartCloud Provisioning enables us to bring in customer environments on VMware and reduce costs by moving them to KVM. We’ve also found KVM much easier to install and manage,” explains Martijn van Zoeren, CEO, Dutch Cloud, in a customer case study.
For more information, go to Jean Staten Healy’s blog about KVM and the cloud, Why Open Virtualization is Important for Cloud.
FAQ 2: KVM currently has a small market share. Don’t the other hypervisors have too big a market share for KVM to have any chance of success?
According to analysts, only one in five physical servers is virtualized so far – meaning there is plenty of headroom, because many clients are still making their virtualization decisions. In addition, customers are fearful of vendor lock-in with proprietary solutions. With the availability of new management tools - such as IBM Systems Director VMControl, which covers both proprietary and open source virtualization technologies - heterogeneous virtualization is becoming a much more realistic choice.
Customers like choice and lower costs – as long as the alternative is still enterprise-grade. Linux and Eclipse succeeded through a combination of great technology, a compelling customer value proposition, and an open approach not dominated by any single vendor. KVM has all of these attributes.
FAQ 3: What is the momentum? Is KVM following Linux? Is it sufficiently similar to Linux that it will follow that trajectory?
The market share gain for KVM is likely to be driven by users with an affinity for Linux and open source, by customers who feel they are paying too much for their current virtualization approach, and by service providers implementing cloud computing who want a cost-effective, secure, and scalable virtualization option.
In addition, KVM has a number of advantages compared to other hypervisors. KVM currently holds the top seven SPECvirt virtualization benchmark results. These all use Intel processors and Red Hat virtualization, on a mixture of HP and IBM systems. On the same 2-socket and 4-socket hardware, KVM delivers slightly better virtualization performance than VMware. But where KVM really excels is scalability. Only KVM has published SPECvirt benchmarks for 8-socket, 80-core systems; it can scale much better than VMware and support many more processors and much larger memory, because it inherits the scalability of Linux. This translates to greater virtual machine density on large x86 systems, and therefore better resource sharing and lower costs.
KVM also inherits the Mandatory Access Control of SELinux (Security Enhanced Linux), and uses it to provide a very high level of security between virtual machines. This means that clients can be confident that their data and applications are fully protected, even in a multi-tenant cloud environment. Only KVM has this level of hypervisor security, which it inherits from Linux. In addition, KVM has recently been certified at EAL4+ level in the Common Criteria security certification, with RHEL 5 and IBM servers, and is currently in evaluation with RHEL 6. This gives government and other security-conscious clients the confidence that KVM has been tested at a top security level.
KVM also inherits quality of service features from Linux, including cgroups (control groups), which enable resources such as processors and memory to be allocated to specific virtual machines, ensuring that high-priority virtual machines get the resources they need.
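In practice, libvirt - the management layer typically used with KVM - exposes these cgroup controls through the guest's domain XML. A fragment like the following gives a high-priority guest twice the default CPU weight and caps its memory; the guest name and the specific values are illustrative:

```xml
<!-- Illustrative fragment of a libvirt domain definition.
     libvirt translates these settings into cgroup limits for the guest. -->
<domain type='kvm'>
  <name>high-priority-guest</name>
  <cputune>
    <!-- default weight is 1024; 2048 gives this VM a double CPU share -->
    <shares>2048</shares>
  </cputune>
  <memtune>
    <!-- hard cap on memory usage: 4 GiB -->
    <hard_limit unit='KiB'>4194304</hard_limit>
  </memtune>
</domain>
```

Because the enforcement is done by the kernel's cgroup subsystem rather than by the hypervisor itself, the same mechanism that prioritizes ordinary Linux processes prioritizes virtual machines.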
Customers are looking for an open alternative to proprietary virtualization solutions, and KVM provides an enterprise-ready option.
FAQ 4: Where is KVM being adopted today and what are the typical use cases?
KVM is being used for cloud computing because cloud service providers, both public and private, are looking to minimize their costs - by increasing the density of virtual machines on each physical server and by reducing the software licensing costs of the hypervisor.
KVM is also being used for Linux server consolidation. Linux servers are currently less virtualized than Windows servers, and KVM is the natural choice since it is already integrated into the leading enterprise Linux distributions from Red Hat, SUSE and Ubuntu.
And, KVM is also being used by enterprises with heterogeneous virtualization. Large enterprises that are already using VMware and are comfortable using multiple hypervisors are now adding a new wave of virtualized servers. Adding KVM into their data center environments has become a much easier decision to make now that multiple hypervisors can be managed from a single management console.
FAQ 5: How does KVM fit in with the OVA, oVirt, OpenStack, and the rest of the virtualization community?
The KVM community develops the base hypervisor, while oVirt develops the virtualization management software and also packages the hypervisor and management software together. KVM is also included in the Linux kernel source tree, rather than being a separate add-on, so it is fully tested and integrated.
OpenStack is a hypervisor-agnostic cloud infrastructure. It was originally developed on KVM but has since added support for multiple hypervisors. OpenStack also uses the native hypervisor management tools – for example, using oVirt to deploy, start and stop virtual machines. In addition, oVirt enables fine-grained control of KVM, suited to a work group or resource-pool sized infrastructure. It also includes automation, and offers much more detailed monitoring, because virtual machines need finer granularity in management.
FAQ 6: Does KVM have vMotion or live migration?
FAQ 7: How does KVM differ from Xen? Why did IBM stop working on Xen and start working on KVM?
There is a tipping point at which a disruptive technology takes off in the marketplace, displacing other technologies that may have previously been believed to be firmly entrenched. For open source solutions, this occurs when there is a compelling reason, such as cost, to switch from proprietary technologies, after it has been proven that the new technology delivers enterprise compatibility, ecosystem support, and the high quality and reliability required for enterprise adoption. We have already witnessed that tipping point with Linux - and we are seeing that same story play out now with KVM. As a result, it’s clear that 2012 is shaping up to be the break-out year for KVM.
Creating the backdrop for KVM to emerge in this leading role on the enterprise virtualization stage are two key factors. One is that there continue to be more un-virtualized servers sold than virtualized servers, as documented in a recent IDC webinar. This means that there is still an advantageous market situation for virtualization technology. If people think the game is over, they are wrong. There are still many organizations in the process of adopting virtualization, and there is still a lot of opportunity.
The second contributing factor is the unrelenting pressure on organizations to control costs. Customers are looking to reduce their overhead while maintaining the enterprise quality of their infrastructure software. This is very clear. From the start, people looked at virtualization as a way to help them to eliminate underutilized resources and consolidate and share services better, thus lowering their cost. But what they don’t want to do is end up paying a lot for virtualization software as well. We are hearing from customers about their concerns regarding the high cost of proprietary virtualization software and we know that this is becoming a big pain point for them. The pressure to control cost is driving organizations to virtualization in general, and then steering them in particular to KVM.
Still, the trend toward server virtualization has existed for several years, and the pressure to contain costs as well is certainly not new.
Why, then, is 2012 shaping up to be the big year for KVM in particular?
There are three main reasons.
For one thing, Windows servers have virtualized faster than Linux servers, but now KVM is embedded in all of the enterprise-proven Linux distributions including Red Hat and SUSE, as well as in Canonical Ubuntu, making it easy for Linux servers to be virtualized. According to a recent Canonical survey, KVM is now more popular than Xen as a virtualization hypervisor among survey respondents.
In addition, more and more customers are looking at dual virtualization strategies, adopting KVM in addition to VMware. Why? Cost containment. They have an installed base of VMware, but now that KVM is available and enterprise-ready, they see that they can further reduce expenses on the next wave of server virtualization. What enables this is the availability of simple and effective virtualization management. For example, IBM Systems Director VMControl, which added support for KVM in 2011, enables customers to gain more from infrastructure-wide virtualization. Customers can reduce the total cost of ownership of their virtualized environments – servers, storage and networks – by decreasing management costs and increasing asset utilization. VMControl enables the management of virtual environments across multiple virtualization technologies and hardware platforms from a single pane of glass, enabling users to benefit from a multi-hypervisor virtualization strategy.
IBM’s new SmartCloud Foundation offerings allow organizations to install, manage, configure and automate the creation of cloud services in private, public or hybrid environments with a higher level of control than previously available in the industry. For example, since deploying IBM SmartCloud Provisioning, IBM customer Dutch Cloud, which has a mixed virtualization environment with both KVM and VMware servers, has seen its client base expand significantly while its administrative workload has been reduced. Now the IT team spends 80% of its time on client migrations and only 20% of its time on administration—a more than 70% decrease in administrative time. As a result, Dutch Cloud’s monthly recurring revenue has tripled twice in the last 6 months but their operational costs have remained flat.
KVM has now reached a level of security, enterprise-readiness, and third-party support that allows customers to deploy it with confidence; in addition, it possesses a robust and rapidly expanding ecosystem of supporting technology providers.
And so, for all these reasons, it seems clear that 2012 will be a very big year for KVM: major Linux distributions are supporting KVM; enterprises are adopting KVM as part of dual virtualization strategies, further bolstered by the development of comprehensive management from a single pane of glass; and KVM is proving it is secure, enterprise-ready, and has a strong ecosystem around it. Plus, its lower cost provides a compelling reason for every enterprise, large and small, to seriously consider KVM. Just as it did for Linux, the tipping point for KVM has arrived.
Director, WW cross-IBM Linux and Open Virtualization
I am very excited to announce to the community the first educational webcast on KVM hosted by the Open Virtualization Alliance (OVA). You do not need to be a member of the OVA, the webcast is open to the public and free for you and your clients.
The OVA is a consortium of over 225 companies that was formed to foster the adoption of open virtualization technologies. Open virtualization is important to our clients because it enables technology choice based on business and technical requirements, avoids vendor lock-in to a single platform, and offers lower cost to virtualize and manage their virtual machines. Whether it’s enterprises consolidating infrastructure, deploying private clouds or service providers offering massive public cloud infrastructures to their clients, server virtualization is critical to the design. And for that reason, enterprises and service providers are looking for open and cost-effective virtualization solutions that meet a wide range of technical and business requirements.
KVM has emerged as an impressive enterprise-grade alternative to expensive proprietary virtualization solutions. Many leading independent software vendors (ISVs) are committed to KVM and KVM-based solutions. But much more remains to be done before the market fully realizes the benefits of this technology. And this is how the OVA helps with its first educational webcast.
Free and open is not the only reason you should register for the webcast, though; it is the amazing expertise we will have access to:
and more from OVA members Acronis and Fujitsu.
I invite you to join me to learn more about KVM as an Enterprise-Grade solution with record-breaking performance, superior scalability, advanced security and lower cost.
Let’s show our support of the OVA and invite our colleagues, clients, and friends of affordable and open innovation to this webcast.
The Challenge of Managing Multi-Platform Virtualization
By Alan Radding
For the past decade, while virtualization was experiencing widespread adoption, it was considered an x86-VMware phenomenon. Sure, there are other hypervisors, but for most organizations VMware was synonymous with virtualization. Even on the x86 platform, Microsoft Hyper-V was the also-ran.
Virtualization, however, provides the foundation for cloud computing, and as cloud computing gains traction across all segments of the computing landscape virtualization increasingly is understood as a multi-platform and multi-hypervisor game. Today’s enterprise is likely to be widely heterogeneous. It will run virtualized systems on x86 platforms, Windows, Linux, Power, and System z. By the end of the year, expect to see both Windows and Linux applications running virtualized on x86, Power Systems, and the zEnterprise mainframe.
Welcome to the virtualized multi-platform, multi-hypervisor enterprise. While it brings benefits - choice, flexibility, cost savings - it also comes with challenges, the biggest of which is management complexity. Growing virtualized environments have to be tightly managed or they can easily spin out of control, with phantom and rogue VMs popping up everywhere and gobbling system resources. The typical platform- and hypervisor-specific tools simply won’t do the trick. This will require tools to manage virtualization across the full range of platforms and hypervisors.
Not surprisingly, IBM, which probably has the most virtualized platforms and hypervisors of any vendor, also is the first with cross-platform, cross-hypervisor management in Systems Director’s newest version of VMControl, version 2.4. This is truly multi-everything management. From a single console you control VMs running on x86 Windows, x86 Linux, and Linux on Power. And it is agnostic as far as the hypervisor goes; it can handle VMware, Hyper-V, and KVM. It also integrates with Microsoft System Center Configuration Manager and VMware vCenter.
The multi-platform VMControl 2.4 dovetails nicely with another emerging virtualization trend—open virtualization. In just a few months the Open Virtualization Alliance has grown from the initial four founders (IBM, Red Hat, Intel, and HP) to over 200 members. The open source KVM hypervisor the alliance is championing handles both Linux and Windows workloads, allowing organizations to avoid yet another element of vendor lock-in. One organization already used that flexibility to avoid higher charges by running the open source hypervisor for a test and dev situation. That kind of open virtualization requires the kind of multi-platform virtualization management VMControl 2.4 delivers.
Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
To read more, check out Part I of an important new blog series from our Tivoli Storage team that explains the value of virtualizing storage resources. Also, stay tuned to Parts II and III that take this “storage hypervisor” concept further and into the cloud!
Yesterday, 7 industry leaders announced the formation of the Open Virtualization Alliance. (Click here to see the press release). IBM, BMC Software, Eucalyptus, HP, Intel, Red Hat and SUSE have created this consortium to foster the adoption of open virtualization technologies including Kernel-based Virtual Machine (KVM).
The Open Virtualization Alliance will promote examples of customer successes, encourage interoperability and accelerate the expansion of the ecosystem of third party solutions around KVM, providing businesses improved choice, performance and price for virtualization. The Open Virtualization Alliance will provide education, best practices and technical advice to help businesses understand and evaluate their virtualization options. The consortium complements the existing open source communities managing the development of the KVM hypervisor and associated management capabilities, which are rapidly driving technology innovations for customers virtualizing both Linux and Windows® applications.
So, why this focus on KVM? It’s all about choice and cost. KVM is an open source hypervisor that provides enterprise-class performance and scalability to run mission critical Windows and Linux workloads. Because it’s open source, KVM is a cost effective alternative. Because IBM and Red Hat stand behind it, it’s enterprise ready.
KVM is the most recent step in the evolution of open source x86 virtualization. Based on Linux, KVM is the most cost-effective virtualization technology in the market. KVM scales, is secure, and delivers the benefits provided through the open source community, avoiding vendor “lock-in”.
KVM is unique because it turns the Linux kernel into a bare-metal hypervisor using the hardware virtualization support in Intel and AMD processors. KVM runs Linux, Windows, and other types of virtual machines directly on hardware and as the x86 hardware virtualization support has improved, so has the performance of KVM.
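That dependence on hardware virtualization support can be checked from userspace before deploying KVM: Intel VT-x advertises the `vmx` CPU flag and AMD-V advertises `svm` in /proc/cpuinfo. The following is a minimal illustrative sketch (not an official KVM tool) that parses that file's flag lines:

```python
import os

def hw_virt_flags(cpuinfo_text):
    """Return the hardware-virtualization CPU flags (vmx for Intel VT-x,
    svm for AMD-V) found in /proc/cpuinfo-style text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # A flags line looks like: "flags\t\t: fpu vme ... vmx ..."
            found |= {"vmx", "svm"} & set(line.split(":", 1)[1].split())
    return found

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        flags = hw_virt_flags(f.read())
    print("KVM-capable:", sorted(flags) or "no hardware virtualization flags")
```

On a KVM-capable host you would additionally expect the `kvm` and `kvm_intel` (or `kvm_amd`) kernel modules to be loadable; this sketch only covers the CPU-flag check.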
A KVM hypervisor inherits the performance, scalability and security characteristics of Linux - which has been enterprise-hardened for over 10 years and is trusted by millions of organizations in the heart of their data centers to run their mission-critical workloads. KVM scalability was recently demonstrated by a SPECvirt publication using a 64-core, 2 TB IBM System x3850 X5 server that achieved 336 actively running guests, more than twice the capacity of the nearest competitive result. KVM (RHEV) also rated a very credible #2 in a recent InfoWorld virtualization shoot-out.
Since KVM is based on Linux, its developers do not need to develop every feature from scratch; rather, they benefit from relevant features in the Linux kernel. KVM naturally leverages the scheduler, memory management, power management, hardware device drivers, platform support, and other features continuously being produced by the thousands of developers in the Linux community, giving KVM a significant "feature velocity" and breadth of source code review that other virtualization solutions cannot match. KVM has also helped bring new features to the Linux kernel, including kernel samepage merging (KSM), transparent huge page support, and a new user-mode device driver infrastructure.
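The KSM feature mentioned above is visible through the kernel's sysfs interface (/sys/kernel/mm/ksm), where counters such as pages_sharing report how many guest pages have been deduplicated. As a rough sketch - assuming the common 4 KiB x86 page size, and not an official accounting tool - the memory KSM is saving can be estimated like this:

```python
import os

KSM_SYSFS = "/sys/kernel/mm/ksm"

def ksm_saved_bytes(pages_sharing, page_size=4096):
    """Rough estimate of memory deduplicated by KSM: each page counted in
    pages_sharing is mapped onto an already-shared page rather than
    holding its own copy."""
    return pages_sharing * page_size

if __name__ == "__main__" and os.path.isdir(KSM_SYSFS):
    with open(os.path.join(KSM_SYSFS, "pages_sharing")) as f:
        sharing = int(f.read())
    print(f"KSM is currently saving roughly "
          f"{ksm_saved_bytes(sharing) / 2**20:.1f} MiB")
```

On hosts where KSM is enabled, this kind of check is how administrators verify that memory over-commitment is actually paying off.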
IBM has brought into KVM its expertise and longstanding commitment to enterprise virtualization and open source:
· Development - IBM developers have provided key KVM features around performance, security, resource over-commitment, and reliability.
· Customer Solutions – IBM has large teams of engineers and architects dedicated to helping customers exploit the benefits of KVM in their enterprise. IBM’s many customer engagements have provided direct input to KVM enhancements.
· Product Support – A large and growing number of IBM products take advantage of open virtualization and KVM technology.
IBM software products that support KVM today include: IBM DB2, IBM ILOG, IBM Change and Configuration Management Database, IBM Lotus Domino Next, IBM Lotus Forms, IBM Lotus Web Content Management, IBM Maximo Asset Management, IBM Tivoli Access Manager, IBM Tivoli Asset Management for IT, IBM Tivoli Monitoring, IBM Tivoli Provisioning Manager, IBM Tivoli Service Request Manager, IBM Tivoli Storage Manager, IBM Tivoli Usage and Accounting Manager, IBM WebSphere, and more.
KVM has matured rapidly in recent years, and today offers an enterprise-class, open virtualization alternative on IBM System x®, BladeCenter® and other x86 servers. With top performance benchmarks, superior scalability and leading security capabilities, KVM offers both Linux® and Windows® customers a choice as well as lower costs.
KVM is Ready for Business.
Jean Staten Healy
Director, WW Cross-IBM Linux and Open Virtualization, IBM
"Virtualization without good management is more dangerous than not using virtualization in the first place."—Gartner Group
That's a blunt quote, but its point is well taken. Management should be an integral element of any serious virtualization strategy from the start.
If it's not, the virtualized infrastructure certainly won't deliver its full potential; in fact, it might even be a losing investment. Far from virtualization transforming IT for the better, it will have transformed IT for the worse.
This is why Managing Workloads is one of four key priorities in IBM's new virtualization framework—a modular approach designed to help organizations create a tailored virtualization strategy that will work best for them.
Some organizations may already have in place a modern array of management solutions designed specifically to get the best results from virtualization. For the rest, however, this particular module isn't one they're going to be able to skip.
Virtualization represents such a sea change in the nature of the infrastructure that effective management solutions must be deployed, configured, tested and in use from Day One.
IBM Software Defined 2700052JD4 Tags:  virtualization ibm virtualizationmanagement powerkvm ibmcloud cloud linux powersystems docker vm bluemix kvm hypervisor power8 redhat platform cloudcomputing paas 6,424 Views
IBM and Red Hat have been teaming up for years. Today, Red Hat and IBM are announcing a new collaboration to bring Red Hat Enterprise Virtualization to IBM’s next-generation Power Systems through Red Hat Enterprise Virtualization for Power.
Together, the integration of IBM POWER8 – with its capabilities for high performance – and Red Hat Enterprise Virtualization’s enterprise virtualization and management features provide a strong combination – particularly for larger enterprise deployments and mission-critical applications.
When IBM announced the new POWER8 processor and next-generation scale-out Power Systems earlier in 2014, PowerKVM was also introduced. The introduction was notable because it marked the first time IBM was providing open hypervisor technology - beyond the proprietary IBM PowerVM - on Power. By supporting KVM, IBM is removing a potential adoption hurdle for users familiar with Linux and KVM on x86 who want to adopt the enhanced hardware platform.
The open source hypervisor KVM (Kernel-based Virtual Machine) offers many advantages in terms of cost, security and simplicity, and as a result is gaining ground in the enterprise, particularly among organizations that already have Linux servers deployed in their data centers, or are interested in consolidating workloads or building a flexible infrastructure. This trend is also part of a larger movement toward deploying more than one type of hypervisor in a data center, which has been termed “hyperversity” by the Gabriel Consulting Group.
With the rollout of KVM on Power, IBM is committed to making it as easy as possible both for Power users who have not used Linux or KVM before and for existing Linux and KVM users - who are familiar with KVM on x86 or System z but not Power - to gain all the benefits of PowerKVM. In addition to the favorable economics of KVM virtualization, PowerKVM virtualization fully leverages POWER8’s symmetric multi-threading to achieve the highest performance possible with the hardware.
Providing an intuitive web panel with common tools for configuring and operating Linux systems, Kimchi and Ginger are add-on tools that are not required to manage a host or guests, but they make the PowerKVM experience more user-friendly. In fact, Kimchi can be used on x86 systems as well. Kimchi and Ginger for IBM PowerKVM 2.1.0 were released in June 2014.
The bottom line is that Kimchi and Ginger make it easier to adopt PowerKVM whether users have had prior Linux experience or not. That is the whole idea – to make it easier for users who do not have experience with Linux and virtualization to get on board and manage and use virtual machines using PowerKVM.
Aline Fatima Manera, Christy L Norman Perez & Paulo Ricardo Paz Vital
Staff Software Engineers, Open Virtualization, Linux Technology Center, IBM
IBM supports many open source projects. One of the newest is Docker, a project for container-based virtualization that allows developers to encapsulate applications and dependencies and deploy them on Linux-based virtual machines.
The idea is that with Docker, developers don’t have to worry about application dependencies and can deploy containers to any Linux machine that has Docker.
PowerKVM provides hypervisor technology that is familiar to proprietary x86 virtualization users as well as committed Linux and KVM users.
This is a big change. Our goal with PowerKVM is to make it as simple as possible for someone who is not Power-oriented to switch to Power and very easily pick up our systems, manage them, configure virtualization, and get their Linux scale-out workloads running. The whole user experience has been very much aligned with what x86 provides from an administrator perspective. And, importantly, support for KVM allows users to select a single cross-platform virtualization technology, simplifying management.
We have taken the total PowerKVM offering and made it completely open source – all the way down to the actual firmware that is required to run PowerKVM.
Exploiting the POWER8 Hardware
In addition to the well-known cost advantage of KVM virtualization – which is considerable – as well as being completely open, the new PowerKVM virtualization exploits the unique features of the new POWER8 servers. For example, it exploits POWER8’s symmetric multi-threading with up to 8 threads per core. By leveraging the unique capabilities of the POWER8 servers, we allow workloads to get the highest performance possible from the hardware.
Over time, we will add support for more devices and add new features, continuing IBM’s tradition of a commitment to open technologies that dates back to the late 1990s. With this initial introduction of PowerKVM, our goal is to provide the simplicity and ease of use that is familiar to x86 virtualization users as well as committed Linux users. This is just the beginning.
Kimchi is a spicy Korean side dish. It is also the code name for a new open source virtualization management project that offers sweet familiarity.
Kimchi is a new open source project aimed at providing an easy on-ramp for people who would like to start using KVM (Kernel-based Virtual Machine) but believe it will be too difficult. Kimchi is targeted at users who may have avoided the open source hypervisor because they don’t have experience with Linux or don’t have the ability to install a management server, or simply don’t have time to invest in Linux administration.
But unlike the spicy side dish Kimchi, the open source project Kimchi offers a taste of something sweet - a familiar user interface for virtualization management. Put simply, that is what Kimchi is all about - removing barriers to using KVM for a set of potential users.
Open Source Tool Designed to Appeal to VMware and Windows Administrators
There are certainly people in the enterprise who are Linux administrators and are perfectly comfortable with the way KVM is today. They regularly work with Linux admin tools and KVM fits right in to their day-to-day practice.
But there are also VMware administrators and Windows administrators who are not familiar with Linux admin practices and are not comfortable with the KVM tools. These people in particular will benefit from Kimchi, since the user interface is similar to that of VMware and Windows tools, thus helping to ease the transition to KVM.
Kimchi’s Role in the KVM Ecosystem
If you have one Linux server, then installing Kimchi on that server is quick and easy. Kimchi puts a thin layer over what is already there with KVM and Linux. You don’t need to install a separate management server. All you have to do is point your browser at the KVM host, and with just a couple of clicks you can install your first guest and start running it.
While it does not come as part of KVM yet, it is hoped that Kimchi will be mature enough to be packaged up with some of the community Linux distributions in 2014, and then be included in some enterprise Linux distributions after that. The beauty of the Kimchi interface is that it boils management features down to their essence, simplifying everything, without a requirement that users have any Linux skills. And, it is rendered using HTML5 so there is total independence of both device and operating system, meaning that you can use Kimchi from a Windows or Linux work station, or a tablet or a phone.
Kimchi Reaches a Functional Milestone
Because it is a simple point-to-point management tool, it is not able to provide clustering or resource pooling. Users are limited to managing a few hundred virtual machines at a time, one host at a time.
Kimchi reached a functional milestone in October 2013 with the release of Version 1. Although it is still early in the development process for the project, it is now at the point where we think it has enough functionality for people to try it. The clear advantage is that users don’t need to maintain any management infrastructure - and they can get started using KVM right away.
IBM’s Commitment to Kimchi
IBM supports Kimchi because it represents another way to promote KVM adoption and remove barriers to open source virtualization, which IBM believes is a smart choice. Kimchi is a sound, multi-platform management tool. We, at IBM, are also using it to manage KVM on Power. It will come bundled with KVM on Power, available later in 2014.
Future Development Plans for Kimchi
At this point, the focus for Kimchi going forward is on community building and additional feature development. The input from the community will determine the future direction for Kimchi, which is an Apache-licensed project hosted on GitHub, and incubated by oVirt.org.
If you would like to learn more about Kimchi and get involved, go here.
IBM Distinguished Engineer and Chief Virtualization Architect Open Systems Development
KVM (Kernel-based Virtual Machine) is technically excellent as a hypervisor across the board. The performance, scalability and efficiency, device support, ability to run different types of guests, and hardware support are all first-rate - and it is also integrated with Linux. At this point, the upstream development focus for KVM is on two fronts: first, to exploit new hardware (and that is not just an x86 proposition any longer) and second, to make KVM easier to use and smart enough to take care of itself so that it requires less attention from the user to get the best performance. Here are five important upstream KVM features coming in future enterprise distributions that will help make that happen:
You can expect these new capabilities to be introduced into Enterprise Linux distributions sometime around the end of 2013 since it generally takes about 6 months for an upstream feature to get into an enterprise distribution. These are high-end features that go well beyond what can be done now with commercial hypervisors. And, importantly, they will be easy to use.
Mike Day - IBM Distinguished Engineer and Chief Virtualization Architect, Open Systems
With the help of a robust ecosystem, open source technologies such as KVM become a force to be reckoned with.
What is it that causes some new technologies to gain wide acceptance while others simply fall by the wayside? It’s a given that in order to be meaningful, new technologies must be enterprise-grade, they must be cost-effective, and they must address a real need. And, at least in the open source world, the endorsement of a robust community is the other critical factor. KVM (Kernel-based Virtual Machine) is a case in point.
KVM has made great progress since its inclusion in the Linux kernel in 2007, observes analyst Gary Chen in a recent IDC white paper. In addition, he notes, the strength of KVM as well as its ecosystem makes KVM an increasingly attractive virtualization choice for customers that rely on Linux and beyond.
The point is: You may have a product, but if you don’t also have an ecosystem, you will hit the “so what” factor. In essence, there is not a complete solution – at least, not until there is a community around it. And the more individuals and companies that contribute code to an open source initiative, include the technology in their products, and provide services related to it, the more polished the solution stack becomes.
Take a look at the ecosystem around KVM and you will find a range of robust communities that aim to address a specific area or requirement. IBM, which has backed open standards and open source technologies for a long time, is a founding member of each. And of course KVM itself is developed by an open source community.
The OpenStack Foundation, for example, is a recent entrant into the open source ecosystem around KVM. Launched as an independent foundation in 2012, the goal of the OpenStack Foundation is to foster cloud interoperability. The OpenStack Foundation serves developers, users, and the entire ecosystem by providing a set of shared resources to grow the footprint of public and private OpenStack clouds. To date, the foundation has more than 9,800 individual members from 87 countries – and has also secured more than $10 million in funding.
The Open Virtualization Alliance, launched in May 2011, is a consortium committed to fostering the adoption of open virtualization with KVM. To date, the OVA counts more than 250 vendors from all over the world among its membership. The consortium advances awareness and understanding of KVM, drives adoption of KVM-based solutions, and helps promote interoperability and best practices to accelerate the expansion of the ecosystem of third-party solutions around KVM – giving enterprises improved choice, performance and price through open virtualization with KVM.
Modeled after the Apache Foundation, Eclipse, LVM, and many other open source communities, the oVirt Project was launched in December 2011. oVirt develops and distributes an open source virtualization management platform that combines the KVM hypervisor with management capabilities for hosts and guests. In this way it supports organizations looking for open alternatives to traditional virtualization technology, both for the hypervisor and for virtualization management.
Some individuals and organizations – like IBM – are involved with all three of these groups. Others select the one that meets their own unique interests or needs. But while there is an open invitation to participate, make no mistake – open source communities are merit-based systems. This is a good thing – the communities provide a stimulating combination of competition and cooperation – creating what we call “a friction of ideas.” And this is what ultimately results in high-quality, well-vetted products.
Don’t miss out on the opportunity. Get involved!
Adam Jollans - Program Director, Worldwide Linux and Open Virtualization Strategy, IBM
Before we jump directly to the headline, let me explain one of the most important metrics of virtualization: VM density - the number of virtual machines running on a host. Virtualization is often used to reduce the hardware required to operate a data center, and the more servers one can consolidate onto a virtualization host, the fewer host systems one needs. This of course results in much lower operating costs: fewer required software licenses, less energy consumed, considerably less space needed, and fewer administrators. So, in order to decide which solution is best for you, it's vitally important to be able to compare VM density among different virtualization solutions. One of the ways you can do that is with the industry-standard virtualization benchmark, SPECvirt_sc2013. You may have heard of its predecessor, SPECvirt_sc2010, which was, of course, released in 2010. And you may be aware that KVM quickly dominated the results produced with that benchmark, earning the top score in every server configuration that has been used to produce results: 2-, 4-, and 8-socket systems.
SPECvirt_sc2010 has been a good representative workload, but like many benchmarks, its relevance can erode over time as technology and users' actual workloads change. The IT industry is continually evolving, and newer benchmarks are needed to approximate the workloads that are relevant. The evolution of SPECvirt from sc2010 to sc2013 is an example of staying relevant. So, how is sc2013 different? Three things set it apart from sc2010:
So, how well does KVM do with sc2013? Let's take a look at two results: one using an IBM Flex System x240 server with Red Hat Enterprise Linux 6.4 and its Kernel-based Virtual Machine (KVM) hypervisor, and another using an HP ProLiant DL380p Gen8 server with VMware ESXi v5.1. Both systems under test are equipped with Intel E5-2690 processors, 256 GB of memory, and SSDs for storage.
Care to guess how much higher the VM density was for the IBM KVM solution?
No, even higher... 37% more VM density!
That is a "stop what we are doing and choose a different strategy" result. The IBM solution - Red Hat Enterprise Linux 6.4 with KVM - consolidated 37 virtual machines, while the HP-VMware result consolidated only 27 VMs. Imagine the reduction in both hardware and virtualization licensing costs you could achieve by moving from VMware to KVM. Learn more about IBM open virtualization and KVM here.
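The density gap translates directly into server count. As a back-of-the-envelope sketch - the 1,000-VM estate below is a hypothetical figure, only the 37 and 27 VMs-per-host densities come from the benchmark results above:

```python
import math

def hosts_needed(total_vms, vms_per_host):
    """Number of identical hosts needed to run total_vms at a given VM density."""
    return math.ceil(total_vms / vms_per_host)

# SPECvirt_sc2013 densities cited above: 37 VMs per host on KVM vs. 27 on
# the competing hypervisor, applied to a hypothetical 1,000-VM estate.
kvm_hosts = hosts_needed(1000, 37)   # 28 hosts
esx_hosts = hosts_needed(1000, 27)   # 38 hosts
print(f"KVM: {kvm_hosts} hosts, alternative: {esx_hosts} hosts "
      f"({esx_hosts - kvm_hosts} fewer servers with KVM)")
```

Every avoided host also avoids its per-socket hypervisor licenses, power, and rack space, which is where the consolidation savings compound.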
Senior Software Engineer, IBM Linux Technology Center
At the Linux Technology Center (LTC), our focus has shifted over the years. Initially, the LTC’s emphasis was largely centered on Linux itself, but the scope has since expanded. When we started, we spent a lot of time making sure that all of IBM’s products worked with Linux and that Linux ran well on our different families of servers - x86, Power, and mainframes - helping the IBM Software Group take advantage of Linux for their hundreds of software products, and sometimes stepping in with services to make sure that they could deploy Linux in their engagements.
From that, we became involved in helping Linux move into new areas. We worked with customers that were interested in deploying Linux for scale-out file systems and utilizing real-time Linux, and helped make enterprise requirements like Linux high performance and scheduling a reality. Over the years, the LTC has worked on open source development well beyond the kernel in areas as diverse as RAS (reliability, availability, serviceability), device support, networking, systems management, security, Samba networking protocol, the toolchain, standards, test and quality. Now that Linux features are mature, we are turning our attention to the new frontiers of open source innovation – big data, cloud, and mobility.
Over the course of our involvement in open source, we have helped launch consortiums as a way to bring companies together and get projects moving quickly – probably more quickly than they would have if they had developed organically. For example, we were involved in the formation of Linaro, which was focused on Linux for ARM processors that are used in cell phones, cars, and embedded in other devices. And, most recently, we helped kick-start OpenDaylight, a project under The Linux Foundation focused on a common software-defined networking platform. The result of all this work with different open source paradigms is that inside IBM, as well as externally, we are recognized for our expertise both technically and organizationally.
Because of the LTC, IBM is known as being good at working with open source initiatives – we know how to leverage it, the proper way to partner, and, when there is new open source technology that is emerging, people often come to us for help in pulling the project together in a cohesive way. The LTC has become a locus for people to gain assistance in solving their own problems or “scratching their own itch.” Ultimately, that is good for IBM – and something we all can benefit from. That’s what “community” means.
Director, IBM Linux Technology Center
Linux Evolves and So Does IBM’s Linux Technology Center
At the IBM Linux Technology Center (LTC), we sometimes forget – because we have been around so long – that for some, the LTC is “new” news. Thanks to the success of Linux and other open source projects, there are people continually joining the open source technology ecosystem. Often, they don’t know our history, so we want to explain how we act as a resource for not only IBM but also for our partners and customers.
In the late 1990s, IBM had begun using open source software in a number of areas - especially the Apache Web Server which IBM was using internally and considering using in its products. IBM’s research teams were doing more and more with open source software and Linux, and our high performance computing customers were beginning to become interested in open source software and Linux, as well.
In 1998, Dan Frye, Vice President, IBM Open Systems Development, took the lead in ascertaining what the company’s participation in open source software should be. Through that effort, the plan to make a substantial commitment to Linux for IBM products and for Linux itself came to fruition. In 2000, IBM decided to invest $1 billion in Linux, and to help improve the operating system by working within the community. The Linux Technology Center was born out of that investment, and I am happy to say, many other companies subsequently became involved and there was an explosion of development around Linux.
The LTC provides a Linux operating system development team for IBM, supporting all IBM server platforms, all IBM server software, and acting as the technical liaison to our Linux distribution partners. IBM is part of the Linux open source community, and works directly with Linux distributors.
The team of developers working with the LTC grew fairly quickly from just a dozen, to 50, to a hundred, to several hundred developers today. Initially, we were looking at basically understanding open source and trying to make meaningful contributions. We were working to make Linux a better operating system for the kinds of things that we knew our IBM customers would want. In those days, that was reliability, scalability, better testing, performance, I/O support – even documentation – and as we did that, we began to understand Linux better and started to use it more widely internally at IBM.
The announcement of IBM’s $1 billion investment and the early work we did enabled Linux to gain acceptance by many large enterprise customers that might have been slower to come to Linux had IBM not aggressively supported it. Today, the Linux focus for the LTC is evolving. For example, we initially worked on the printing subsystem because that was an inhibitor to open source adoption, but that is a done deal now. The things we have to spend time on have completely changed and our efforts tend to be much more strategic these days.
While we continue to channel our efforts to some of the same areas such as making sure Linux supports IBM Power Systems and IBM System z, we are also becoming involved in new open source efforts. It is part of a natural evolution. Linux has grown up.
More about what the LTC is working on now in my next blog.
Director, IBM Linux Technology Center
The open source hypervisor KVM (Kernel-based Virtual Machine) is gaining ground in the enterprise. KVM adoption echoes the early days of Linux: organizations, by now familiar with server virtualization, are evaluating not only hypervisors from the current market leaders but also open source approaches. According to data from IDC, KVM is growing at 150% year over year in terms of unit shipments, with over 100,000 servers worldwide already using it for virtualization.(1)
Expanded use of KVM is also occurring as part of a broader trend in which organizations are opting to deploy more than one hypervisor in their data centers. In a trend Gabriel Consulting Group terms “hyperversity,” organizations are avoiding standardization on a single hypervisor and are increasingly willing to select the right tool at the right price.
A strong area for KVM is among organizations that already have Linux servers deployed in their data centers, and who are looking to consolidate workloads or build a flexible infrastructure. The reasons for KVM’s early adoption among current Linux users are varied, but can be distilled down to three main considerations – cost, security, and simplicity.
Since 2007, when KVM was first distributed as a core part of the Linux kernel, it has been considered a mainstream feature of Linux by enterprise users. Today, KVM ships with the major enterprise Linux distributions, including those from Red Hat, SUSE, and Canonical. This enables Linux shops to reduce the cost of ownership of virtualization, since they do not have to purchase a separate hypervisor. KVM also supports high server utilization, resulting in greater asset utilization and, in turn, greater cost efficiency.
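As a practical aside (not from the original post): before enabling KVM, Linux shops typically confirm that the CPU exposes hardware virtualization extensions and that the kernel has created `/dev/kvm`. A minimal sketch, assuming a `/proc/cpuinfo`-style text dump as input; the helper name is hypothetical:

```python
def has_hw_virt(cpuinfo_text):
    """Return the hardware virtualization flag ('vmx' for Intel VT-x,
    'svm' for AMD-V) found in a /proc/cpuinfo dump, or None."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"   # Intel VT-x
            if "svm" in flags:
                return "svm"   # AMD-V
    return None

# On a live system you would read the real file and also confirm that
# the kernel has exposed the KVM device node, e.g.:
#   has_hw_virt(open("/proc/cpuinfo").read()) and os.path.exists("/dev/kvm")

sample = "processor\t: 0\nflags\t\t: fpu vme de pse vmx ssse3\n"
print(has_hw_virt(sample))  # vmx
```

This only checks the prerequisites; actually loading the `kvm_intel` or `kvm_amd` module is handled by the distribution.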
Security is a concern for all organizations, and KVM has distinct strengths in this area. SELinux (Security-Enhanced Linux) enables Mandatory Access Control, which delivers advanced, need-to-know security: explicit permission is required for access to specific data and functions, rather than permissions being role-based. This isolation is critical if, for example, a malicious program tries to break out of its own virtual machine to access the host or another virtual machine. In addition, EAL4+ certification means that KVM is ready for adoption by governments and other organizations where security certification is required. With the combination of SELinux and its EAL4+ certification, KVM provides strong enterprise-level security.
Proven KVM Success
In the six years since it became a core part of Linux, KVM has had time to earn users’ trust. When a technology is new it tends to be mistrusted, but greater acceptance is building now as more enterprise use cases for KVM are documented and shared.
Jean Staten Healy - Director, Worldwide Linux and Open Virtualization, IBM
(1) Source: IBM press release “IBM To Open Cloud Lab For Wall Street Clients”, April 8, 2013
As virtualization has grown to become a reliable mainstream approach to reducing costs, maintaining or even expanding performance and delivering flexibility to support business needs, it has become strategic to IT organizations around the world. At the same time, Red Hat and IBM have become leaders in Kernel-based Virtual Machine (KVM) development and promotion, and Red Hat has distinguished itself by delivering the KVM hypervisor and corresponding management tools in Red Hat Enterprise Virtualization.
According to a recent whitepaper by analyst firm IDC entitled “KVM: Open Virtualization Becomes Enterprise Grade”, sponsored by IBM and Red Hat, KVM has made impressive progress since its inclusion in the Linux kernel in 2007, and adoption has grown especially in key use cases such as Linux server consolidation and cloud computing. The IDC whitepaper states that virtual servers outshipped physical servers by a ratio of more than 2:1 in 2012. The firm’s numbers also report that 55% of all installed workloads as of the end of 2011 were virtualized, and that new workloads are being virtualized at a rate of 67%. IDC also finds that hypervisors competitive to VMware, such as KVM, are offering enterprise customers more and more choice.
Red Hat and IBM’s long collaboration, originally formed around Red Hat Enterprise Linux, has expanded to focus on virtualization as well. The two industry leaders began collaborating around open virtualization many years ago, and this has continued to evolve with the fast pace of innovation delivered through KVM. Both organizations play a leadership role in the Open Virtualization Alliance (OVA), which they helped to form in 2011 as founding members. The OVA promotes the growth of KVM’s ecosystem in the marketplace; as membership has grown and become more diverse, it has opened opportunities for KVM deployment across servers, storage, networking, management, operating systems, security, and business applications, and KVM has established itself as one of the most popular foundations for Infrastructure-as-a-Service (IaaS) clouds. IBM has also utilized KVM and Red Hat Enterprise Virtualization as the underpinnings of its own public cloud offering, IBM SmartCloud Enterprise.
In mid 2012, Red Hat Enterprise Linux 5 and Red Hat Enterprise Linux 6, in conjunction with the KVM hypervisor on IBM Systems, were each also separately awarded Common Criteria Certification at Evaluation Assurance Level 4+. These certifications paved the way for the KVM hypervisor and open virtualization to be used in homeland security projects, command-and-control operations and throughout government agencies.
Graphs in the IDC whitepaper also show that many users would like to combine multiple hypervisors, with as many users choosing to deploy an open source secondary hypervisor as those who would deploy a proprietary one. Over half of those surveyed said they would choose to build a cloud on a new hypervisor, as opposed to their existing system.
These proof points signal a bright future for KVM, as open virtualization takes its place in the enterprise.
Learn more about IDC’s perspective on the virtualization industry in the IDC whitepaper, sponsored by IBM and Red Hat, “KVM: Open Virtualization Becomes Enterprise Grade,” February 2013.
Radhesh Balakrishnan, global leader, Virtualization, Red Hat
Jean Staten Healy, Director, Worldwide Linux and Open Virtualization, IBM
Several key areas of strong adoption have emerged for the open source hypervisor.
Over the past year, we have seen a marked shift in the conversation around KVM (Kernel-based Virtual Machine). Questions early on focused on whether the open source hypervisor could be trusted as an enterprise-grade virtualization solution. We think that question has been answered with a resounding “Yes, KVM is ready for business!” Most recently, we demonstrated with a first-ever virtualized x86 TPC-C benchmark result that even the most demanding and complex workloads can be virtualized – with KVM. Nothing, though, speaks better to adoption in the enterprise than clients actually using it. Today, many IBM clients have deployed KVM with IBM hardware and/or software, and you can read their success stories here.
Now, the questions around KVM have changed. Today, clients want to understand where KVM is being used, who is using it, and why. In answering their questions, we have identified several areas in which KVM is frequently adopted. Here is a brief look at a few of the use case scenarios leading the way in KVM adoption.
Companies with Linux servers in their data centers – KVM is the natural choice for companies that already have Linux servers, since it ships as a core part of the Linux kernel, adding no separate hypervisor cost or complexity.
Cloud service providers and organizations building their own private clouds – At its core, a move to the cloud is about cutting costs and enabling flexibility. Both cloud service providers and organizations building private clouds need a cost-efficient, flexible virtualization foundation.
Multi-hypervisor environments – After organizations have become familiar with server virtualization, they are often open to the idea of a second hypervisor, particularly if it can provide them with expanded benefits and lower costs. According to the Gabriel Consulting report, “‘Hyperversity’ Rages On,” based on Gabriel’s annual and independent x86 Data Center Survey, two-thirds of the respondents are using two or more hypervisors. Many organizations have a second-source policy for their major IT components, including hypervisors.
Virtual Desktop Infrastructure (VDI) – KVM’s strengths shine in the virtual desktop arena because of the weight placed on sharing resources, high reliability, security, and performance. Vissensa, a managed service provider, successfully provisions flexible desktops using a virtual desktop solution with a KVM implementation of Virtual Bridges VERDE.
Business-Critical Applications – Organizations that want to create a responsive business infrastructure for the future are increasingly seeing benefits in expanding virtualization to their mission-critical systems. For example, using Red Hat Enterprise Virtualization, Casio Computer Company has not only decreased its costs but also addressed business management challenges and laid the groundwork for a future cloud environment.
KVM’s Place in the Enterprise
While there is still room for further growth in terms of KVM’s market penetration, these five use cases represent a base in the enterprise on which KVM is building a strong presence. We will explore each of these use cases in more detail in future blogs.
Jean Staten Healy - Director, Worldwide Linux and Open Virtualization, IBM
A recently posted entry on this blog titled "What If You Could Virtualize Your Entire Enterprise? And Why Your Virtualization Journey Is Only Beginning" posed the following question:
"What if...you could virtualize your mission-critical applications while ensuring or improving service levels?"
While virtualization has made significant penetration into the data center, there are still workloads which have yet to fully exploit its benefits. The previous blog entry identifies some of these workloads as database, OLTP, analytics, and ERP. In many cases these workloads have yet to be migrated to virtualized environments due to logistical issues or performance concerns. To demonstrate that virtualized environments can host these types of enterprise workloads without significant performance sacrifices, sound, realistic proof points that highlight these capabilities need to be established.
In an effort to define such a proof point, the IBM Linux Technology Center (LTC) recently completed the first-ever formal publication of the TPC-C benchmark, which showcases an OLTP workload, in an x86 virtualized environment. In this proof point, a two-socket Intel Xeon system (IBM System x3650 M4) achieved 1,320,082 transactions per minute (tpm-C) while performing in excess of 300,000 I/O operations per second. This level of performance exceeds 94.8%* of the posted two-socket TPC-C publications at this time, and is only 12.2% lower than a separate IBM result published just last year, which obtained a score of 1,503,544 tpm-C on a similarly configured non-virtualized two-socket Intel Xeon system. The virtualized TPC-C publication also achieved a price/performance ratio of $0.51/tpm-C, lower than the comparable non-virtualized system’s $0.53/tpm-C and the lowest price/performance ratio ever achieved by IBM.
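The quoted gap between the two results can be checked directly from the tpm-C scores given in this post (a quick sanity check, using only the numbers above):

```python
# Scores quoted above: virtualized vs. non-virtualized two-socket TPC-C.
virt_tpmc = 1_320_082
bare_tpmc = 1_503_544

# Relative gap between the virtualized and bare-metal results.
gap = (bare_tpmc - virt_tpmc) / bare_tpmc
print(f"{gap:.1%}")  # 12.2%

# Price/performance ($/tpm-C): the virtualized result is the cheaper of the two.
print(0.51 < 0.53)  # True
```

The roughly 12% virtualization overhead is the substance of the claim: the workload runs under KVM at close to bare-metal throughput while costing less per transaction.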
To achieve this exceptional level of performance and price/performance, this publication exploited the integrated KVM virtualization technology found in Red Hat Enterprise Linux 6.4 and the virtualization-friendly System x3650 M4. IBM’s System x3650 M4 is designed to support virtualization of customers’ most important business workloads, delivering outstanding uptime, performance, scalability, I/O flexibility, and rock-solid reliability.
With the enterprise advancements now available in KVM, as demonstrated by this proof point, customers can and should begin the transition of their mission-critical applications to virtual environments. As the virtualization platform used to produce the first-ever virtualized x86 TPC-C publication, the KVM technology in Red Hat Enterprise Linux is ideally suited for this role.
* 58 two-socket TPC-C publications as of 2/25/2013
Director, IBM Linux Technology Center
How does your organization define the value of virtualization solutions? Are you calculating your return on investment (ROI) strictly based upon cost savings? If so, you might be missing out on the true benefits of virtualization.
Let’s rewind for a second. Historically, server virtualization was the first step organizations took in an attempt to save costs. Identifying and eliminating underperforming servers in the IT infrastructure helped recapture floor space and reduced costs associated with software licensing, cooling and power. Server consolidation even made it easier for IT administrators to increase their productivity: a reduction in servers and associated management enables a greater focus on projects strategic to the enterprise.
Server virtualization opens the door to efficiency, but it is only the beginning of the virtualization journey. If you are satisfied with achieving these very basic results, you might be missing the whole point of virtualization (and the benefits, too). Consider the potential benefits of virtualizing the entire enterprise.
Let’s fast forward to today. Cost savings, once the primary driver of this technology shift and the primary method of calculating ROI for virtualization, is being replaced. Today’s industry-leading companies are defining value differently. Value is now defined by the trend of moving complex, mission-critical workloads to the cloud as a way to accelerate delivery of new products and services to market, and by the need to ensure each workload performs at its peak.
At IBM Pulse 2013, Jacqueline Woods, Vice President Marketing, along with a panel of industry analysts and clients, will discuss a more effective way to calculate the ROI, or value, of comprehensive IT data center virtualization (i.e., server, storage, network and mission-critical applications). This approach meets these needs through a combination of heterogeneous architectures, dynamic optimization, enterprise virtualization and unified control - a mix that offers the flexibility to meet the changing nature of business-critical workloads, and something only IBM can offer.
Want to learn more and join the discussion? Come visit us at IBM Pulse 2013 and register for IBM Pulse session #CSM-2390: What Happens When You Add IBM Systems and Expertise to the Software Defined Data Center?
We’ll continue this discussion next week by highlighting other ways to drive efficiency through virtualization and give you a glimpse into the software-defined future of the data center.
In the meantime, please share your thoughts with us on how your organization defines the value of virtualization, and on the challenges you encounter calculating virtualization ROI.
Lastly, at IBM Pulse 2013, Jacqueline Woods, Vice President Marketing, along with a panel of industry analysts and clients, will discuss a more effective way to calculate the ROI, or value, of comprehensive IT data center virtualization. We would welcome hearing your thoughts before and during the session. You can register for Jacqueline’s session (CSM-2390): “What Happens When You Add IBM Systems and Expertise to the Software Defined Data Center?”
What If You Could Virtualize Your Entire Enterprise? And Why Your Virtualization Journey Is Only Beginning
Server virtualization has been instrumental in helping enterprise IT to shrink their data center footprint. The key enabling technology of server virtualization, the hypervisor, is a layer of software that supports multiple operating environments on the same physical system. With virtualization, companies are now able to quickly provision new applications using existing infrastructure to speed time-to-market, and to consolidate many servers to increase utilization and reduce costs. More applications now operate on virtual servers than on physical systems, which is a testament to how pervasive virtualization has become in the enterprise.
Yet, a closer look at virtualization reveals just how confined it is in the data center. Looking at the IT solution stack of compute, storage, networks and middleware, only compute resources have been transformed by virtualization. Furthermore, the universe of applications running on virtualized infrastructure is dominated by simpler, non-business-critical workloads. It turns out that virtualization is much, much bigger than most have been led to believe.
What if… you could virtualize your mission-critical applications while ensuring or improving service levels? While many still associate virtualization with workload consolidation, top benefits also include faster time-to-market for new services, greater infrastructure flexibility, cost-effective business continuity, and trusted resiliency and security. Workloads such as databases, OLTP, analytics and ERP are the next frontier of applications yet to be virtualized, and these applications require a different approach to virtualization than simply extending existing platforms. (Learn how a Consumer Products company used virtualization to improve business continuity and application performance)
What if… you could take the benefits realized from server virtualization and apply them to the rest of the data center? Storage, networks and application infrastructure are ripe to be virtualized to ensure greater resource utilization, provide faster response to business change, and simplify data center management. This next wave of virtualization requires a comprehensive approach to IT that goes beyond hypervisors. In this new world order, IT infrastructure will need to support many more dynamic workloads and leverage automation to continuously and optimally manage the environment. (Read how a world leading manufacturer deployed virtualization to streamline operations, cut costs and achieve faster response to business requests for new capabilities)
As you plan for this next phase of virtualization, make sure you invest in technologies that enable your data center to be trusted, efficient and simplified. A trusted infrastructure allows for the highest qualities of service and security that are necessary for mission-critical applications. An efficient infrastructure provides maximum utilization of resources, including staff, productivity, energy, space and IT assets. A simplified infrastructure fully integrates data center management to manage the complete IT lifecycle.
IBM will guide you along this virtualization journey. In an upcoming series of blog posts, we will highlight specific virtualization technologies that are essential to your organization and demonstrate how leading-edge enterprises are investing in the latest virtualization solutions. In the meantime, happy virtualizing!
IBM Virtualization Strategy Leader, IBM Systems Software
New support makes it more affordable to move to the cloud.
IBM SmartCloud Entry, which IBM launched in October 2011, now supports the open source KVM (Kernel-based Virtual Machine) hypervisor. This means that with just one product, customers can support virtual machines that are running on KVM, VMware, and/or PowerVM.
Cloud computing is one of those technologies for which people have differing definitions, but SmartCloud Entry does the most basic things that just about everyone expects from a cloud solution: web-based provisioning for a cloud, and a pay-as-you-go model. SmartCloud Entry supports those two critical capabilities with a couple of key attributes that decrease risk to organizations. First, it makes it really simple for a company that is new to the cloud to get started with a private cloud safely, behind its own firewall, and second, it does so with pricing that is completely affordable for the mid-market.
Think of the self-service provisioning enabled by SmartCloud Entry compared to the days of putting in requests to the IT department as being analogous to using a vending machine versus waiting in line in a cafeteria to place your order. With the vending machine, you walk up and push a button. You know what is possible ahead of time - as well as the price - and what you select is made available immediately.
In addition, the pay-as-you-go model enables organizations to pay for computing services on a utility basis, as opposed to the old capital expense model, where they had to invest in new hardware and software and the whole process took weeks to get up and running. SmartCloud Entry’s basic metering enables administrators to charge back to users based on who used which workload and for how long.
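SmartCloud Entry's actual metering formats are not described in this post, so purely as an illustration, the chargeback idea just described (bill each user for which workloads they ran and for how long) reduces to a simple aggregation. The record fields and the flat rate below are hypothetical:

```python
from collections import defaultdict

# Hypothetical metering records: (user, workload, hours used).
usage = [
    ("alice", "web-vm", 40),
    ("alice", "db-vm", 10),
    ("bob", "web-vm", 25),
]

RATE_CENTS_PER_HOUR = 12  # illustrative flat rate, not an IBM price

def chargeback(records, rate_cents):
    """Aggregate each user's charge (in integer cents) from usage records."""
    totals = defaultdict(int)
    for user, _workload, hours in records:
        totals[user] += hours * rate_cents
    return dict(totals)

print(chargeback(usage, RATE_CENTS_PER_HOUR))
# {'alice': 600, 'bob': 300}
```

Working in integer cents sidesteps floating-point rounding, which matters when charges are summed across many small usage records.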
Of course, the prerequisite for any cloud is a virtualized environment and one of the unique differentiators of SmartCloud Entry is that it works with multiple virtualization solutions. On IBM Power Systems, it works with PowerVM, and on IBM System x or other x86 servers, it can support VMware and now also KVM, and in the future, that support will be extended to other hypervisors, including Microsoft Hyper-V.
Increasingly, organizations are using multiple hypervisors within their IT environments. They may have started with a proprietary hypervisor three or four years ago when virtualization was relatively new, but now that their confidence has increased and they know exactly what they require and what they don’t, they are choosing the hypervisor that meets their needs affordably and appropriately. In fact, Gabriel Consulting Group - which terms this trend “hyperversity” (a mash-up of “hypervisor” and “diversity”) - found in its latest data center survey that two-thirds of enterprise respondents are using a minimum of two virtualization mechanisms, and that those selections were based on technical and/or cost considerations.
SmartCloud Entry with KVM Pilot Program in China
IBM tested this new release of SmartCloud Entry with KVM support for six months in China - where the company also recently opened its very first KVM Center of Excellence. There is a huge amount of interest in KVM and open source software in general in China. IBM chose that region for our test for a number of reasons. While that market is adopting virtualization later than others, the benefit from their point of view is that they have not been dominated by proprietary technologies as have other regions. As a result, they have more choice, and there is more rapid uptake of KVM. One of the other interesting things noticed during the test was that about half of all customers deploying SmartCloud Entry had a mixed hardware environment, and they liked the fact that one cloud solution spans all their platforms (since SmartCloud Entry supports Power and System x, as well as IBM PureFlex systems).
There is a huge market opportunity in China based on where organizations are on the maturity curve and there is quite a bit of cost pressure also. When I go to China, every conversation is about cloud and how to utilize cloud for efficiency and cost savings, so getting clients on their way to more efficient utilization of cloud technologies starts with the premise of virtualization and then this notion of self-service provisioning and metering. Also, implementing a cloud solution on a very cost effective platform is resonating extremely well in China. As Zhou Yuan, from China Network World observed when IBM launched the new KVM Center of Excellence, “The cloud computing market is growing very fast and hence virtualization technology drew much attention from the market. The announcement of KVM breaks the monopoly and provides a new option for clients - an option with lower cost.”
The Broad SmartCloud Portfolio
We know that for some customers, particularly those in the mid-market, SmartCloud Entry may provide all the features they will need for some time, while others will take advantage more quickly of the pathway to other IBM SmartCloud products. By its very nature, the cloud is about flexibility and elasticity, and that is why it is important to be able to make technology choices that align with those attributes. And that is also why, at IBM, we have a whole portfolio of products under the SmartCloud banner. SmartCloud Entry is the starting point.
First-ever KVM Center of Excellence is located in Beijing where virtualization is being adopted rapidly.
IBM opened its first-ever KVM Center of Excellence, focused on open virtualization with KVM (Kernel-based Virtual Machine), at an event in Beijing, China last week. I participated in the launch event and was really excited at the level of interest from the Chinese media and analysts who attended. We are already seeing some good media coverage for KVM in China (see the IBM press release here).
IBM executives reveal the opening of the first KVM Center of Excellence - Beijing, November 28, 2012 (from left Jean Staten Healy, Kelly Beavers, Dominic Tong, Josephine Cheng)
With this new KVM Center of Excellence, we are leveraging IBM’s KVM expertise in China to make it easy for clients and partners to have access to briefings, to enrich their staffs with training, and to conduct proofs of concept of their own solutions. We chose China for this first KVM Center of Excellence, because of the tremendous growth of virtualization deployments in China. According to IDC, virtualization is growing faster in China than the rest of the world, making it critical for these organizations to get the information they need now. In short, the goal is to enable organizations to become educated about open virtualization and particularly KVM so they can make smarter choices about the virtualization technology they rely on.
For several years now, IBM products and initiatives have been associated with a theme of building a smarter planet. At the most fundamental level, this means enabling better decision making with intelligent analysis based on hard facts and detailed information. For us here at IBM, this is not a marketing slogan; it forms the basis of our efforts. In turn, the KVM Center of Excellence in Beijing will allow both businesses and government organizations to learn about the realities of open virtualization not only from IBM but also from its enterprise Linux partners, including Red Hat and SUSE, who are actively participating in the center by making their software - and their expertise - available.
IBM understands virtualization from the ground up. In fact, IBM created the concept now known as virtualization in 1964, by implementing a hypervisor on the IBM System/360 mainframe. If the main point of server virtualization is to share the resources of IT systems more efficiently in order to lower costs and create a more productive and flexible environment, then open virtualization with KVM takes that approach even further on x86-based systems. It allows organizations to avoid the high cost of proprietary virtualization technologies, as well as the vendor lock-in that can limit their choices in the future. KVM is separate and distinct from other virtualization technologies because it is built on the Linux kernel, and because of that connection it benefits from the vast worldwide open source community of programmers that have contributed to Linux. IBM has for many years committed developers to contributing code to Linux, further enhancing its deep understanding of the kernel, and has been contributing to KVM since 2007. Today, IBM has dozens of programmers and engineers worldwide working on KVM as part of the open source community, contributing across a broad range of development areas, including performance, security and cloud computing.
A key value proposition for KVM is the ability to reduce the overall cost of ownership for virtualization solutions (read more about that here). But beyond the significant advantage of lower cost, KVM offers additional advantages in terms of security, performance, and cloud computing, and there is a range of virtualization management solutions to support it.
In terms of security, the Mandatory Access Control delivered by SELinux and leveraged by KVM goes beyond the Discretionary Access Control enabled by other hypervisors to provide isolation between virtual machines. IBM and Red Hat also recently announced that KVM has achieved Common Criteria Certification at EAL4+ (for Red Hat Enterprise Linux 6 with KVM on IBM System x). The Common Criteria is a set of standards used by governments and other organizations to evaluate the security of technology products. KVM is also included in the Linux source code development tree, rather than being a separate add-on, so it is fully tested and integrated.
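The Mandatory Access Control behavior described above can be pictured with a toy model of sVirt, the SELinux integration used with KVM guests: each guest's QEMU process and its disk images receive a matching set of MCS categories, and policy denies any cross-VM access. The category values below are made up for illustration, and real enforcement happens in the kernel's SELinux policy, not in application code:

```python
# Toy model of sVirt-style isolation. Each guest's QEMU process and its
# disk image carry matching SELinux MCS categories; the mandatory check
# denies any mismatch, and no guest can opt out.

def may_access(process_categories, resource_categories):
    """sVirt-style mandatory check: category sets must match exactly."""
    return process_categories == resource_categories

vm1 = frozenset({"c104", "c240"})  # categories for VM 1's QEMU process
vm2 = frozenset({"c17", "c511"})   # categories for VM 2's QEMU process
disk1 = vm1                        # VM 1's disk image gets the same pair

print(may_access(vm1, disk1))  # True:  VM 1 reads its own disk
print(may_access(vm2, disk1))  # False: VM 2 is denied by policy
```

This is what distinguishes mandatory from discretionary control: the labels are assigned by the system, so even a compromised guest process cannot grant itself access to another guest's resources.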
Underscoring its performance, KVM holds the top seven virtual machine scores on the SPECvirt benchmark. Virtualization is also the foundation of cloud computing, and open virtualization, in turn, supports open clouds. For cloud service providers, the cost efficiency of KVM is critical so that they can provide their services at reasonable rates. Since customers don’t care about the hypervisor on the back end, but are mainly concerned with a high level of service and affordability, KVM is being adopted by cloud service providers.
Notable advances have also been made in management solutions that enable KVM to be controlled alongside other virtualization technologies like VMware. IBM is investing significantly in KVM development and has developed a range of solutions to support it within its own solutions. For example, IBM Systems Director VMControl enables the simplified management of virtual environments across multiple virtualization technologies and hardware platforms, and IBM SmartCloud Entry is a self-service portal for the cloud end-user that complements IBM Systems Director VMControl. Additional IBM offerings with KVM support include IBM SmartCloud Provisioning - which provides high-scale provisioning on heterogeneous hypervisor and hardware platforms - as well as IBM Tivoli offerings, IBM SmartCloud Enterprise, IBM PureSystems, IBM System x, and IBM zEnterprise zBX Blades.
KVM Center of Excellence in Beijing is Logical Next Step
Creating a KVM Center of Excellence is the logical next step in IBM’s journey of supporting and promoting open source technologies.
In 1999, IBM threw its support behind the Linux operating system, and since then, IBM has been a tireless supporter of Linux and open source technologies, contributing financial and technical assistance. As an extension of that commitment, it has also become a dedicated member of the KVM community. The open source ecosystem is constantly expanding, providing a rich assortment of solutions to benefit the community at large. Organizations have been launched to advance open virtualization, supported by industry leaders including IBM and its partners.
For example, there is the Open Virtualization Alliance (OVA), a marketing alliance focused on promoting KVM and open virtualization technologies. As of May 2012 the OVA had more than 250 members, including IBM as one of the governing Board Members. The oVirt project is another open source community, formed by IBM, Red Hat, SUSE and others. It is committed to establishing a development community around an integrated virtualization platform that offers advanced virtualization management tools to manage the KVM hypervisor. And, earlier this year, the independent OpenStack Foundation was also founded to promote the development, distribution and adoption of the OpenStack cloud software. IBM is a platinum member of the OpenStack Foundation, which aims to serve developers, users, and the entire ecosystem by providing a set of shared resources to grow the footprint of public and private OpenStack clouds. Even though OpenStack supports multiple hypervisors, according to IDC, KVM is the unofficial reference standard today, with over 95% of OpenStack deployments using KVM. And of course, there is The Linux Foundation, the non-profit organization dedicated to accelerating the growth of Linux, of which IBM is a platinum member.
Alongside this first IBM KVM Center of Excellence, there are over 20 KVM developers in Beijing who contribute technical expertise. The KVM Center of Excellence in Beijing is open for business and we encourage clients, partners and IBMers to come to the center to learn about the open virtualization choices available and particularly about KVM so they can make smarter decisions about the technology their infrastructures will rely on.
Jean Staten Healy - Director, Worldwide Linux and Open Virtualization, IBM
IBM PureFlex Systems hide complexity while also helping customers avoid virtualization vendor lock-in.
Hiding complexity – it has become the mantra of technology providers. As customers’ IT resources continue to be stretched, staffs are asked to do more, and budgets remain flat or growing slightly, the demand for systems and interfaces that make operations simpler is increasing.
We understand that – and with this customer requirement in mind, IBM created the PureFlex System. However, at IBM we still think that there are certain choices that should not be taken out of customers’ hands because it will limit their agility now – and in the future.
What is PureFlex?
The PureFlex System is one part of the IBM PureSystems family of offerings that IBM announced back in April. PureSystems offer clients an alternative to current enterprise computing models, where multiple and disparate systems require significant resources to set up and maintain. In particular, the PureFlex System enables organizations to more efficiently create and manage an infrastructure. In a sense, the PureFlex System provides the most basic set of compute elements – bringing together server, storage, and networking, as well as management and virtualization in one integrated offering. The result is that clients can start their deployment journey with much already done for them by the factory at IBM. With the built-in management node that we have in PureFlex, as well as how all the compute, storage and network components fit together, we are answering a clear need in the market.
Think of it as infrastructure ready to be used as a service, one that also includes Infrastructure-as-a-Service (IaaS) private cloud management software - IBM SmartCloud Entry - so you can stand up an IaaS private cloud as well.
But just because we are packaging up the pieces for easy deployment, it does not mean that we are taking away choice and flexibility from customers to sculpt the system to fit individual needs and make adjustments later as needed. Far from it.
Deployment Choice with Room to Grow
PureFlex is designed to be deployed in a variety of sizes and scales so it can be used by large enterprises or mid-size customers. PureFlex comes in three flavors - Express, Standard, and Enterprise - which are starting points in terms of the size of infrastructure customers want.
Of course, you can always add capacity and scale and grow. And there are mechanisms to pay as you grow, both from a hardware perspective and from the cloud or service aspect. Essentially, it is designed for and is targeted at a broad swath of customers - not just the mid-market or large enterprise. As the name implies, flexibility is in the PureFlex System’s DNA.
Choice of Hypervisor, Architecture, Operating Systems: Choice in the Same Platform
In terms of the virtualization environment that you can get within the PureFlex Systems, there is choice there as well. You certainly have the x86 virtualization environments, KVM, Microsoft Hyper-V, and VMware’s vSphere. The PureFlex line also includes compute nodes that are based on the IBM Power CPUs, so the virtualized environment in that case is based on PowerVM as the hypervisor. The result is that you get multiple hypervisors, multiple CPU architectures, and also therefore multiple operating systems that are supported within the same platform.
We are hearing from clients, analysts, and other sources that multiple x86 hypervisors are more frequently being deployed within the same data center. In fact, according to a recent study of 345 IT professionals by the Gabriel Consulting Group, almost half of the respondents said they were using two or three hypervisors, and 18% were using four or more. “Hyperversity,” as Gabriel put it, is increasingly the choice. With a mixture of hypervisors becoming more common, any platform that can enable that and provide a common user experience, as PureFlex is designed to do, provides a lot of advantages. Increasingly, customers are rethinking what best suits their needs and their requirements.
With PureFlex, each compute node uses a specific CPU architecture and, on that architecture, a specific hypervisor - but multiple CPU architecture and hypervisor choices can live inside a single chassis. That enables the compute nodes to share the networking, the shared storage, and the management - for both hardware and virtual resources - in the chassis, as well as the Flex System Manager node, which provides multi-chassis management. Things that can be kept common are kept common, and then per compute node you can have a different virtualization environment.
The appeal of KVM comes from its performance, security, and other advantages. For example, particularly in an integrated system such as PureFlex, where the complexity is hidden from the customer, we are able to integrate KVM more completely with the PureFlex system than we can for Hyper-V or vSphere. KVM is part of Linux, and as a result IBM has access to the KVM source code, an IBM development team contributing to KVM, and a relationship with Red Hat that allows us to customize the build.
Our customers want the ability to change hypervisors. With PureFlex they can start out with one hypervisor and migrate to a different hypervisor without reconfiguring the system, including the management infrastructure.
This is relevant because although customers want the simplicity of an integrated system, they may need customization for particular workloads, not a cookie-cutter approach. PureFlex provides the ease of use that is required, but still enables choice on a range of levels to provide flexibility – now and in the future.
Jean Staten Healy
Director, Worldwide Linux and Open Virtualization, IBM
The open source hypervisor KVM is well known as a hypervisor for Linux workloads. However, many folks assume KVM does not run Windows workloads, or at least does not run them very well. In truth, KVM is well suited for both Linux and Windows workloads, both from a technology and a user perspective. And, when moving into the cloud where lower cost, scalability and security are primary concerns, KVM provides a compelling case as mixed workload hypervisor.
Still, some people cling to the incorrect assumption that KVM can’t handle Windows. They couldn’t be more wrong. Here’s some background, followed by a breakdown of the key issues:
Avi Kivity created the KVM hypervisor in the mid-2000s to run Windows in a Virtual Desktop Infrastructure (VDI) product. Most people are surprised to learn that KVM’s heritage is running Windows desktops. Windows (both desktop and server) has always been, and remains, supported by KVM as a first-class workload. This support begins with KVM developers, who test their code with Windows workloads before merging it into upstream KVM repositories.
Red Hat, the leading (but far from only) distributor of KVM, has a mutual certification and support agreement with Microsoft for Windows running on KVM. Red Hat customers who run Windows VMs may receive full support from Microsoft for problems in their VMs.
Features that Benefit Windows Workloads with KVM
Windows memory management techniques allow KVM to optimize memory use on the host especially safely and efficiently for Windows guests. KVM’s capabilities in this regard compare favorably to other hypervisors, according to our internal testing. The Kernel Samepage Merging (KSM) feature de-duplicates identical memory pages on a host. The Windows operating system “zeroes” its memory pages when booting, and KSM de-duplicates those pages by mapping them to the host kernel’s “zero page.” With the first enterprise release of KVM several years ago, KSM proved so effective with Windows that we mistakenly assumed we would not need any other memory management technique for guests.
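The effect described above can be sketched with a toy model of same-page merging. This is an illustration of the idea only, not kernel code: pages with identical contents collapse to one shared copy, so the zeroed pages of freshly booted Windows guests cost almost nothing on the host.

```python
# Toy model of KSM (Kernel Samepage Merging): pages with identical
# content are collapsed to a single shared copy. Windows zeroes its
# memory at boot, so freshly booted Windows guests share one zero page.
PAGE_SIZE = 4096

def merge_pages(pages):
    """Return (unique_page_map, bytes_saved) after content-based merging."""
    unique = {}
    for page in pages:
        unique[page] = unique.get(page, 0) + 1
    saved = (len(pages) - len(unique)) * PAGE_SIZE
    return unique, saved

# Two "guests", each with 4 zeroed pages and 1 distinct page:
zero = bytes(PAGE_SIZE)
guest_a = [zero] * 4 + [b"A" * PAGE_SIZE]
guest_b = [zero] * 4 + [b"B" * PAGE_SIZE]

unique, saved = merge_pages(guest_a + guest_b)
print(len(unique), saved)  # 3 unique pages, 7 * 4096 = 28672 bytes saved
```

In the real kernel, KSM scans and merges pages continuously and breaks the sharing with copy-on-write as soon as a guest modifies a merged page.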
KVM provides Windows device drivers for network and disk. These drivers provide high-performance I/O throughput for Windows guests. They are analogous to VMware’s VMTools.
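For reference, these paravirtualized devices are selected in the guest definition; a minimal libvirt fragment might look like the following (the file paths and bridge name are illustrative, and a Windows guest additionally needs the virtio-win drivers installed to use the devices):

```xml
<!-- Illustrative libvirt device fragments selecting virtio models -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/win2008r2.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

With `bus='virtio'` and `model type='virtio'`, I/O bypasses full device emulation, which is where the throughput gains come from.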
Hardware virtualization support is making KVM yet more effective at running Windows guests. Recent Intel processors have a Pause Loop Exit (PLE) feature that mitigates a performance-killing condition, called guest pre-emption, that disproportionately affected multiprocessor Windows guests. We should expect more of this in the future, as processor vendors pay special attention to Windows performance issues.
The cost advantage provided by KVM is clear. According to a price comparison of KVM vs. VMware and Microsoft Hyper-V done by IBM’s David Hsu, KVM (as provided by its most expensive distribution) is always less expensive than VMware, whether you are virtualizing Windows or Linux servers. Hsu’s analysis shows a strong case for choosing KVM in Linux-only environments and in mixed-workload environments alike: KVM gives cost-conscious companies a strong incentive to avoid VMware or even Hyper-V. In a Windows-only environment KVM may be more expensive than Hyper-V, but VMware is always the most expensive choice no matter what the scenario.
A Windows-Like Administrative Interface for a Locked-Down Hypervisor
RHEV-H from Red Hat (and its open source counterpart at ovirt.org) provides a stand-alone, locked-down hypervisor with a Windows-like administration toolset. (Some find the concept similar to VMware ESX.) In addition, KVM has a Windows-friendly management infrastructure called RHEV-M, with an “upstream” open source counterpart at ovirt.org. These tools are ideal for KVM users who don’t have (or desire) Linux administrative experience.
RHEV-M is available from Red Hat as a server or desktop VDI infrastructure. We have customers who are using both versions of RHEV-H/RHEV-M successfully. It shows that KVM has a form factor and a management infrastructure comparable to and competitive with VMware ESX. No Linux skills required.
When reviewing KVM’s stellar SPECvirt benchmark scores, many folks assume that KVM’s performance for Linux workloads does not transfer to Windows. In fact, KVM’s performance and scalability are roughly the same for Windows and Linux workloads, especially with hardware that supports the PLE feature, which has been available in servers since 2011. The linear scalability we see with the published SPECvirt results is evident in our tests with both Linux and Windows workloads.
KVM offers the same security features that VMware has - plus others, including Mandatory Access Control, a feature that provides isolation between virtual machines and is enabled through SELinux. This feature is not universally available. It comes with KVM, and it applies to Windows workloads as well as to any other workload you can run with KVM. So even if you have a pure Windows environment, that is a benefit KVM provides over any other choice of hypervisor. It is of particular importance, of course, to cloud providers with multiple tenants and to customers with workloads that must be securely separated. As a proof point of the significance of this technology, Red Hat Enterprise Linux 6 with the KVM hypervisor on IBM Systems was recently awarded Common Criteria Certification at Evaluation Assurance Level 4+.
KVM and Windows Applications at Work
There are many examples of KVM doing some heavy lifting at the enterprise level, supporting Windows workloads. IBM SmartCloud Enterprise, an infrastructure-as-a-service offering, uses KVM for both Windows and Linux workloads; the New York branch of a large Chinese bank is using KVM to virtualize Windows servers; and Open Virtualization Alliance (OVA) member company Abyres has transitioned several Malaysian government agencies to open virtualization environments with KVM running Microsoft Windows applications such as Exchange.
Virtualization and the cloud caught on in the first place because of concerns about cost. After making the decision to go that route, the choice isn’t over, and increasingly, KVM provides a competitive, cost-effective option with the features enterprises need for x86 virtualization – for Windows workloads as well as Linux.
IBM Distinguished Engineer & Chief Virtualization Architect, Open Systems Development Software Architect
Whether you’re just testing the waters or have already dived head first into server virtualization, cost reduction continues to be a primary driver of data center optimization. Although virtualization is helping to reduce capital expenses, IT managers are getting stung by increasingly expensive virtualization software licensing costs.
One of the key value propositions for KVM is the potential to reduce the overall cost of ownership for virtualization solutions. To assess the potential savings, you can do a pricing analysis of a KVM stack compared to its competitors (i.e. VMware and Microsoft). Sounds easy, but unfortunately it’s more complex than it seems because virtualization software vendors have different pricing terms and conditions, software is rarely sold at list prices and each data center is configured differently with various platforms, workloads, etc. to account for.
There isn’t a standard formula for calculating or comparing competitive hypervisor stacks, but one way to view the cost differences is through an analysis of publicly available pricing information. Although this comparison won’t give you the exact cost differences for your particular environment, it will show you the overall magnitude of the cost differences between the hypervisor solutions and how each solution is priced.
Pricing Analysis Methodology
The next step is to identify the hypervisor stacks to compare, and determine the price and licensing structure for each solution. The figure below shows how we’ve isolated the hypervisor and hypervisor management solutions along with their associated license, subscription and support prices. To get as close as possible to an accurate comparison, we included the server, operating system, hypervisor, hypervisor management tool and systems management tool for each stack. You’ll see that the KVM stack utilizes the version of KVM that comes with Red Hat Enterprise Linux combined with IBM management tools - IBM Systems Director and IBM Systems Director VMControl.
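Once per-server component prices are gathered, the comparison reduces to simple arithmetic. The sketch below shows the shape of the calculation only; every price in it is a hypothetical placeholder, not a quoted list price, so substitute current figures for each component before drawing any conclusions:

```python
# Sketch of the stack-price comparison described above. All prices are
# hypothetical placeholders, not actual list prices.
def stack_cost(servers, per_server_components):
    """Total cost: sum of per-server component prices times server count."""
    return servers * sum(per_server_components.values())

kvm_stack = {
    "os_plus_hypervisor": 2500,  # e.g. a RHEL subscription incl. KVM (hypothetical)
    "mgmt_tools": 1200,          # e.g. Systems Director + VMControl (hypothetical)
}
vmware_stack = {
    "hypervisor": 7000,          # hypothetical per-server license plus support
    "mgmt_tools": 2000,
    "guest_os": 2500,
}

n = 10  # number of servers being virtualized
print(stack_cost(n, kvm_stack), stack_cost(n, vmware_stack))
```

The real analysis must also account for licensing terms (per socket vs. per server), subscription periods, and discounting, which is exactly why the article warns that no single formula fits every data center.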
Pricing Analysis Results
Here are the results for Scenario #1: 100% Linux-based workloads including the detailed pricing data:
Following the same methodology, Scenario #2: 50% Linux-based / 50% Windows-based workloads results were similar:
Conclusion and takeaways
Now in terms of the pricing analysis, the results above show:
As you consider your virtualization needs, the bottom line is that KVM can help protect you from the sting of increasingly expensive virtualization licensing costs especially for Linux and Mixed workload environments.
Pricing Data Sources:
This blog was originally posted by Red Hat.
Red Hat & IBM Performance Teams
Red Hat is excited to announce today that the Kernel-based Virtual Machine (KVM) hypervisor, which is incorporated in both Red Hat Enterprise Linux and Red Hat Enterprise Virtualization, has again achieved top performance results. This latest performance mark was achieved on the IBM® System x3850 X5 host server with QLogic® QLE 256x Host Bus Adapters, Red Hat® Enterprise Linux® 6.3 hypervisor and Red Hat Enterprise Linux 6.3 guests. During testing by IBM, KVM demonstrated its ability to handle I/O rates at the storage performance levels required by enterprise workloads, with four guests handling more than 1.4 million I/Os per second (IOPS). The results are further proof that virtualized workloads can maintain consistent high performance as compared with bare-metal deployments.
KVM’s tight integration with the Linux kernel gives it a dual design, unifying the host operating system and hypervisor modes. Red Hat Enterprise Linux supports multiple virtualization use cases, allowing customers to choose when and where to use virtualization. By leveraging the Linux operating system, KVM virtualization overhead is minimized without detriment to performance. The Red Hat Enterprise Linux 6.3 release also supports an industry-leading 160 virtual CPUs per virtual machine, allowing even large workloads to be virtualized.
These tests, run on the Red Hat and IBM technology combination described above, have demonstrated that enterprise workloads can be efficiently migrated into a virtualized environment while still delivering high performance. The KVM host server, consisting of an IBM System x3850 X5 with four Intel Xeon® E7-4870 processors (sockets) and 256 GB of memory, ran on a storage back-end capable of delivering 1.4 million IOPS.
Single and multiple virtual machines were tested, using Red Hat Enterprise Linux 6.3 on all guests and on the host. Both reads and writes were included in the test workload in order to more accurately simulate the demands of an enterprise workload. Using only four guests, KVM was able to achieve up to 1.4 million IOPS for random I/O requests of 8 KB in size and more than 1.6 million for random requests of 4 KB in size. The KVM performance matched the physical operating system performance of this setup, and KVM was bounded by the performance of the test storage back-end. Using a single guest, KVM was able to achieve about 800,000 IOPS for random I/O requests of 8 KB in size, and more than 900,000 IOPS for random requests of 4 KB or less. It should be noted that VMware recently indicated that it could achieve one million IOPS with six virtual machines running on a single vSphere™ 5.0 host [1].
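As a sanity check on the scale of these numbers, the implied aggregate throughput and per-guest load can be computed directly (decimal GB/s, assuming the stated request sizes and an even spread across the four guests):

```python
# Back-of-the-envelope arithmetic for the benchmark figures quoted above.
def throughput_gbps(iops, block_bytes):
    """Aggregate throughput in decimal GB/s for a given IOPS rate."""
    return iops * block_bytes / 1e9

print(round(throughput_gbps(1_400_000, 8 * 1024), 1))  # ~11.5 GB/s at 8 KB
print(round(throughput_gbps(1_600_000, 4 * 1024), 1))  # ~6.6 GB/s at 4 KB
print(1_400_000 // 4)                                  # ~350000 IOPS per guest
```

Over 11 GB/s of random I/O through four guests makes concrete why the bottleneck in this test was the storage back-end rather than the hypervisor.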
Average latency rates for both tests remained low and constant across different I/O request sizes, demonstrating that block I/O performance on KVM can remain predictable, even with a changing number of guests. As the number of guests and I/O requests increases, block I/O performance on the KVM hypervisor is able to scale to match demand load.
Red Hat Enterprise Virtualization for Desktops and Servers is the first enterprise-ready, fully open source virtualization platform. Red Hat Enterprise Virtualization offers industry-leading performance and scalability for real-world enterprise applications including Oracle, SAP and Microsoft Exchange, and includes enterprise virtualization management features such as live migration, high availability, load balancing and power saving. Because Red Hat Enterprise Virtualization is available through Red Hat’s software subscription model, users benefit from lower acquisition ownership costs for the same or better feature set when compared to other solutions. The platform recently entered beta for its upcoming 3.1 release.
Because Red Hat Enterprise Virtualization and Red Hat Enterprise Linux incorporate the same KVM hypervisor, those systems using Red Hat Enterprise Virtualization are gaining the same virtualization technology that achieved the top performance posted by the Red Hat Enterprise Linux KVM and IBM systems used for this performance trial.
To read more about this top virtualization performance result from Red Hat and IBM, read the full performance brief here: https://access.redhat.com/knowledge/refarch/2012-red-hat-enterprise-linux-kvm-hypervisor-io-achieving-unprecedented-virtualiza.
Enterprise adoption of KVM is growing, and KVM features are continually being updated and expanded. Development in KVM is focused not only on high performance – the must-have for enterprise adoption – but also on support for application developers and systems administrators, storage, usability, high availability, disaster recovery, and security. As an active participant in the KVM development community, IBM continues to dedicate its considerable expertise to open virtualization with KVM. (Learn more about the IBM KVM commitment.) Here is a look at some of the KVM features we expect to see in upcoming enterprise Linux releases – and why they will matter to enterprise users.
Support for Application Developers and Systems Administrators
Why it matters
Why it matters
There are four things about KVM FS that are important. First, it is integrated, so you don’t need to install it - and clustered file systems are typically very difficult to install. Second, it is designed to serve up virtual disk images: it knows it is working with disk images that represent virtual machines and treats them that way. When you back up a file, it knows you are backing up a virtual machine and that you need to do something like take a snapshot of the virtual machine and then back up the base image. Third, it allows you to migrate a virtual machine while retaining access to its image file. And fourth, VMware has a feature called VMFS, an integrated clustered file system that does the same thing, so KVM FS is significant because it gives KVM something directly analogous to VMware’s VMFS.
Usability and Device Support
Why it matters
High Availability and Disaster Recovery
Why it matters
The specific feature on the way is called a “static root of trust,” and it is the first step in Trusted Computing. It means that the first thing you do is validate the boot block to make sure it has not been tampered with, and then you validate the boot loader - and if the boot loader is good, it validates the kernel that it boots. And then, at that point you can validate other software that you load, extending the trust chain. The reason it is static is that it has to start at boot up and you can’t re-establish that chain of trust until you boot the machine up again.
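The chain of measurements described above can be sketched in a few lines. This is a toy model only - real trusted boot stores measurements in a TPM rather than a Python dictionary - but it shows why a single tampered stage breaks trust in everything that follows:

```python
# Toy model of a static root of trust: each boot stage is measured
# (hashed) and compared against a known-good value before handoff.
import hashlib

def measure(component: bytes) -> str:
    """Measurement of a boot component (here, a SHA-256 digest)."""
    return hashlib.sha256(component).hexdigest()

def verify_chain(stages, expected):
    """Walk boot stages in order; fail at the first mismatched measurement."""
    for name, blob in stages:
        if measure(blob) != expected[name]:
            return f"TAMPERED: {name}"
    return "TRUSTED"

boot = [("boot_block", b"bb-v1"),
        ("boot_loader", b"grub-v2"),
        ("kernel", b"vmlinuz-3.x")]
golden = {name: measure(blob) for name, blob in boot}  # known-good values

print(verify_chain(boot, golden))        # TRUSTED
boot[1] = ("boot_loader", b"grub-evil")  # attacker replaces the loader
print(verify_chain(boot, golden))        # TAMPERED: boot_loader
```

And because the chain starts at the boot block, it is "static": once broken, it can only be re-established by booting the machine again.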
Why it matters
As we approach VMware’s annual VMworld 2012 Conference in San Francisco at the end of August, conventional wisdom says that customers are standardizing on a single x86 hypervisor for their IT infrastructures.
But conventional wisdom may well be wrong. A new report published by Gabriel Consulting Group shows a remarkable diversity in the x86 hypervisors used in practice by IT departments. Nearly half of the 345 IT professionals surveyed were using two or three hypervisors, and a remarkable 18% were using four or more hypervisors. Hypervisor diversity – or “Hyperversity” as Gabriel terms it – is the majority choice.
The report is based on Gabriel’s annual and independent x86 Data Center Survey. What is different about this survey is that it reaches the IT professionals who work in data centers, rather than the CIOs. Remember the early days of Linux – CIOs were unaware of how much Linux was being used in their infrastructures, while the systems administrators were rapidly installing more and more Linux in order to cut costs and improve performance. The same may be happening here.
Read all about the reality of x86 virtualization adoption in the new Gabriel Consulting report, ‘Hyperversity Rages On’.
Which hypervisors are being used?
The report confirms that VMware remains the most widely used x86 hypervisor. However, what’s surprising is the level of usage of the other hypervisors, and how the number of customers standardizing on KVM and Hyper-V is growing.
Microsoft’s Hyper-V is the second most commonly used hypervisor, with 40% of customers surveyed using it for some systems. With the large installed base for Windows Server and the coming transition to Windows Server 2012, this is probably to be expected.
KVM is now in third place, with 33% of customers surveyed using it somewhere in their organization, closely followed by both the Citrix and Oracle flavors of Xen. Customers are clearly evaluating open source hypervisors and using them for tactical and point solutions, just as they did with Linux some years ago.
Where the real change has occurred is in hypervisor preference. KVM standardization has doubled from 3% to 6% compared to two years ago, and Hyper-V standardization has shot up from 3% to 8%.
No longer is VMware the only game in town.
What does hypervisor diversity mean?
Firstly, it means that the x86 hypervisor market is much more like the server market than the desktop market. Customers will have choices and there won’t be just one dominant hypervisor. Competition will drive innovation, lower costs, and enable customers to avoid vendor lock-in.
Secondly, changes in the way IT is delivered are likely to be disruptive and reshape the market. We’ve seen how the rapid emergence of smart phones and tablets has resulted in a different set of leaders from the desktop PC market. In a similar way, the emergence of cloud computing, integrated servers and hybrid systems may well have a big impact on the x86 hypervisor market.
Lastly, hypervisor diversity has enormous implications for the virtualization management and cloud infrastructure software that layers on top of the hypervisor. Customers will need management tools that are able to manage multiple hypervisors, rather than just a single one. VM mobility will need to support moving between hypervisors, driving open standards and interoperability. And cloud software will need to intrinsically support a range of hypervisors.
And who will be the winners in all of this?
Customers, of course. Choice is good.
How to avoid headaches and risk by using IBM Systems Director VMControl and SmartCloud Entry
When organizations start thinking about embarking on a cloud deployment, they see the advantages of a utility-like model, and the appeal of something such as Amazon’s EC2 public cloud offering resonates strongly with them. However, according to analysts, a primary concern about using public clouds is security, since most people have heard the horror stories about outages and data leaks. And so, despite the ease of use of the public cloud, the attraction of a private cloud is that clients get the same user experience but it is all safely built inside their own firewall - using their own resources and their own IT infrastructure, thereby eliminating what they perceive as the biggest risks of a public cloud.
But what often gets overlooked in the evaluation of public vs. private cloud is that when anyone enters into an agreement to use a public cloud, they never have the headache of looking after the infrastructure. They basically pay for the workloads they want and get billed as they use them and that is all they have to do. The cloud provider, whether it is Amazon or someone else, has an army of people who are in charge of looking after the hardware that is running that cloud: keeping it properly functioning, managing it and patching it - and there is a lot of work involved in doing that. When a customer decides to implement a private cloud model, they inherit that maintenance headache - along with everything else that comes with being responsible for a cloud infrastructure.
As a result, when you think about implementing a private cloud, it is also necessary to think about how you will manage not only the virtualized resources but also the underlying physical infrastructure to guarantee service delivery and adhere to SLAs. Where IBM has a significant benefit over competing private cloud software providers is that we also deliver the platform management to help clients look after the physical infrastructure - and none of our competitors do that.
SmartCloud Entry is a thin layer of software that overlays IBM Systems Director and IBM Systems Director VMControl. Those products provide you with platform management and the virtualization management. SmartCloud Entry adds cloud capabilities as well as a simple self-service Web portal enabling end users to provision their own workloads without involving the IT or systems administrator. It also takes care of metrics on the back end to track who is using which workload for how long so that the IT team can then “bill” people as they use workloads, providing a method to move to a pay-as-you-go or utility model as opposed to the traditional route of having to pay capital expense for provisioning new hardware and software. And, it also pushes the burden of provisioning (or creating a new virtual machine or workload) to the individual so it becomes a self-service IT infrastructure instead of having those requests become backlogged tasks for IT administrators.
With something like vCloud from VMware, clients get a cloud management layer but nothing to help diagnose or fix the underlying physical infrastructure if anything goes wrong – whereas, IBM offers that as an inherent part of SmartCloud Entry. We give you the tools you need to manage, diagnose, and repair the physical infrastructure to keep a cloud up and running and that is a major benefit of SmartCloud Entry.
Currently, PowerVM and VMware are supported by SmartCloud Entry, with support for additional hypervisors to be added in the future. There is a very high level of interest in KVM. We have large numbers of customers that are clearly looking to move to open source and KVM in a big way including their approach to delivering those workloads through the cloud.
Why KVM and Cloud?
Just think about what happens when organizations are using something like Amazon, a public cloud. In that scenario, they don’t know or care what the infrastructure underneath is, so long as it works and the workloads are available. Adding this cloud layer of abstraction provides a great opportunity for alternative technologies like KVM. If the technology presents a great value proposition to the operators of the cloud, in terms of both cost and the features it offers, it gives them a good way to accelerate deployment in large organizations.
Customer Reduces Server Count by 80% with IBM Systems Director Software and KVM
The partner also helped the client implement IBM Systems Director VMControl Enterprise Edition V2.4 software to manage the virtualized environment and provide advanced levels of usability and visibility. By implementing IBM System x servers virtualized with KVM hypervisor technology and running IBM Systems Director software, the client consolidated the work of 57 previous servers into just nine System x servers - a server reduction of 84%. As a result, the client significantly reduced electricity and space consumption in its data center, greatly simplified IT administration, and gained the scalability to accommodate ongoing business growth.
The Pieces Add Up
The performance improvements in upstream KVM development that are likely to make it into the next Linux releases from major distributors will be especially beneficial to enterprise KVM users. IBM's contributions to the KVM hypervisor are consistent with its longstanding commitment to Linux and reflect a broad strategy: provide customer choice, bring open technology to key segments of the technology market, and give IBM platforms, middleware, and services the best hypervisor technology available. Learn more about the IBM KVM commitment here.
Here’s a look at three notable performance improvements coming in future enterprise Linux releases.
More Efficient Virtualized Storage - Many of us are anticipating the re-worked VirtIO block driver, which increases performance of paravirtual block storage. Part of the rework includes hosting the hypervisor portion of the block driver in the Linux kernel. This approach has already proven successful with the paravirtual networking driver.
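For context, selecting the paravirtual block path is done per-disk in the guest definition. A sketch of what that looks like in a libvirt domain XML (the image path and format here are illustrative):

```xml
<!-- Illustrative libvirt <disk> element: bus='virtio' selects the
     paravirtual VirtIO block driver instead of an emulated IDE/SCSI
     controller; the guest typically sees the disk as /dev/vda. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The rework described above changes where the host-side driver code runs, not this guest-facing configuration.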
Improved Memory Access Speed - A second performance improvement is AutoNUMA (automatic non-uniform memory access). Virtually every machine today - desktop, laptop, server, even a tablet - is a multicore computer, and many of these have non-uniform memory access: one bank of memory is attached to one processor and another bank is attached to another processor, so memory latency depends on which processor touches which bank. AutoNUMA automatically moves processes and their memory pages closer together, so workloads spend more of their time on fast, local memory accesses.
Better Small Packet Performance - Multi-queue support for VirtIO networking will allow more concurrency and reduced latency in paravirtual networking. It specifically increases performance for small packets - which has been a weakness.
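As a sketch, multi-queue VirtIO networking is exposed through the guest's interface definition; in libvirt XML it is the queues attribute (a common rule of thumb is one queue per vCPU, and the value here is illustrative):

```xml
<!-- Illustrative libvirt <interface> element: queues='4' asks the
     vhost backend for four transmit/receive queue pairs, letting
     multiple vCPUs send and receive packets concurrently. -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
</interface>
```

Inside the guest, the extra queues are then activated with something like `ethtool -L eth0 combined 4`.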
Ask any CIO over the past two years and they will likely tell you that virtualization has been a top IT project in their organization throughout that time span. The promise (and realization) of greater IT efficiency via server consolidation and capital expense reductions has propelled server virtualization and hypervisors to the enterprise mainstream in adoption. In fact, some clients are now hitting a wall with server virtualization because they have consolidated all of their “low hanging fruit” workloads, such as IT infrastructure, web servers, and email and collaboration. IDC estimates that nearly two-thirds of all IT workloads will be virtualized by the end of the year [1]. Where do CIOs go from here?
The answer for the vast majority of clients is… more virtualization. As a core IT technology, virtualization has far-reaching, and lesser-known, implications beyond servers and compute. One example of this reach is how the principles of virtualization (i.e., the logical representation of physical resources) can be directly applied to other datacenter elements such as storage and networking for resource consolidation and pooling. Another example is how server virtualization impacts the rest of the datacenter in terms of provisioning resources, optimizing performance and managing service levels. This impact becomes very clear to clients who are beginning their cloud computing journey and haven’t planned ahead with modernizing their datacenter. The bottom line is that IT needs a comprehensive approach to virtualization, including solutions that span servers, storage, networking, management and application infrastructure. At IBM, we call this approach Advanced Virtualization.
CIOs and IT leaders need to expand their virtualization expectations beyond increasing consolidation and reducing capital costs. Advanced Virtualization yields additional benefits by ensuring cost-effective business availability and transforming application support. These expanded benefits are essential to clients working to virtualize their complicated and mission-critical workloads, such as ERP systems, OLTP applications and business analytics solutions. Leading organizations are virtualizing their entire datacenter in order to maximize business agility, lower operating costs, and divert resources to new projects for innovation.
We think virtualization is more than you’ve probably been led to believe. Please join us on June 20th at 1:30pm ET as Scott Firth, Director of IBM Virtualization Marketing, shows how enterprises are investing in virtualization beyond hypervisors and servers. With over 45 years of virtualization expertise, IBM provides an end-to-end approach to virtualizing your enterprise. See how our clients are profiting from our industry leadership.
IBM Virtualization Strategy Leader, IBM Systems Software
1) IDC Market Analysis Perspective: Worldwide Datacenter Trends and Strategies 2011
Virtualization is all about sharing resources of IT systems to get better usage out of them — to lower costs and to make it a more productive environment, with a better return on investment. Of course, you need a sufficiently large server with enough power and capacity so it is worth sharing the resources between different workloads. But once you have decided to virtualize, a key issue to consider is how the source code for the virtualization hypervisor is developed. That brings us to the importance of open source hypervisors, and in particular, KVM (the Kernel-based Virtual Machine).
KVM stands out from other virtualization technologies because it is based on Linux, and therefore benefits from the thousands of programmers in the open source community who have contributed to Linux. A key aspect of KVM and its development community is that there is one code stream that all developers focus on, so the project does not splinter, as happened with Xen, an earlier open source virtualization project.
KVM offers a high level of scalability, and this has to do with the fact that it is building on the scalability already inside Linux, with the result that it can support large memory, and many processor cores. In fact, the top seven SPECvirt benchmarks, which are the industry standard in terms of virtualization benchmarks, all use KVM — whether they have been done on IBM or on HP hardware. Recently, Red Hat Enterprise Virtualization on IBM System x (eX5) demonstrated 18% better virtual machine consolidation performance on a 40-core system versus the competition, due to more efficient virtualized I/O through the use of SR-IOV by both KVM and System x.
We also believe that the Mandatory Access Control security delivered by SELinux as part of KVM is a step above the Discretionary Access Control provided by other hypervisors. IBM and Red Hat also recently announced that KVM has achieved Common Criteria Certification with EAL4+ (for RHEL 5 with KVM on IBM System x). The Common Criteria is an internationally recognized set of standards used by governments and other organizations to assess the security and assurance of technology products.
We think KVM offers something to any environment, but it is being adopted especially fast in four key segments.
First, if you already have Linux servers, KVM is a natural choice for virtualizing them: it is already integrated into the Linux kernel, so you don’t have to do separate testing and integration or purchase a separate product. KVM comes as an embedded hypervisor with Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Canonical Ubuntu LTS.
Second, we are seeing both private and public clouds use KVM because of the advantages it provides in terms of virtual machine density, low unit costs, and advanced security – such as IBM’s SmartCloud Enterprise public cloud offering.
Third, we are seeing rapid adoption of KVM in growth markets. These markets haven’t virtualized as fast as others, but are catching up very quickly, and they want the latest and lowest-cost technology in order to virtualize. There is a great deal of interest in KVM from China, India, and Latin American countries at the moment, as shown by the global membership of the Open Virtualization Alliance, including Huawei, China’s largest telecommunications supplier.
And last, though certainly not least, we are seeing high interest in KVM from big enterprise customers who already have a large installed base of virtualized servers using hypervisors from commercial vendors, but who are now considering other hypervisors for their next set of servers. These organizations are looking to reduce expense and this is where the low cost of KVM really comes into play — because when you are a big enterprise with a lot of virtualized servers, the total amount you are spending on virtualization escalates very quickly.
Apart from cost, the key for these big enterprise customers is: Can they manage that virtualized environment from a single pane of glass; from a systems management approach, can they manage their existing virtual machines and also the KVM machines from one console? This is where the products IBM is developing, including IBM Systems Director VMControl with its support for multiple platforms and multiple hypervisors, really help to manage existing environments.
IT managers today are looking to save money and maintain vendor independence when virtualizing their workloads - while maintaining enterprise qualities such as reliability, scalability and security. Only KVM delivers all of this and we think it’s worth a closer look – visit www.ibm.com/systems/kvm to find out more.
Jean Staten Healy
The cost savings and simplicity of consolidation with the security and scalability of the mainframe
Here at IBM, we often smile when we hear some new buzzword around virtualization. The IBM mainframe has a long history of virtualization. We were doing it before there was a name for it, and System z’s flagship virtualization product, z/VM, is nearly 40 years old. But with all the talk about consolidation plays, virtualized workloads, and private clouds, we sometimes need to point out that System z also provides a level of sophistication, flexibility, security and cost-effectiveness that is simply unmatched.
From a virtualization perspective, System z is a highly attractive platform for eliminating server sprawl. We can - and do - run flat out at near 100% utilization. And System z can commit memory at a scale that simply can’t be achieved with a bunch of commodity-level x86 boxes. System z was meant to run many diverse workloads - and balance that work - from the very beginning. You may see a piece of commodity hardware start to fail or have service problems at 50% or 60% utilization, but with System z, that just does not happen.
Adding to the many attributes of System z is the fact that the cost-effective open source operating system Linux is fully supported on the IBM mainframe. As a result, about 35% of our mainframe customers have an IFL (Integrated Facility for Linux) installed, and 30% of our total systems have Linux installed worldwide.
Using z/VM virtualization technology, clients can run hundreds to thousands of Linux servers on a single mainframe alongside other System z operating systems, such as z/OS, or as a large-scale Linux-only enterprise server solution. z/VM 6.2 can cluster up to four z/VM systems and relocate running Linux guests between them while production continues live. That allows customers to avoid planned downtime for patch updates - and if there is a spike in demand for one workload, they can move it to where the capacity is.
The flexibility of System z is expanded with zEnterprise BladeCenter Extension (zBX), an infrastructure for bringing Power (AIX) and System x (Linux and Windows) blades under the management of System z. Sometimes described as “a system of systems,” zEnterprise closes the gap between the mainframe and distributed worlds.
Organizations for which failure is not an option consistently rely on the IBM mainframe. That’s the reason that System z is the platform of choice for companies and government agencies running mission-critical workloads.
For example, Nationwide Insurance has deployed two IBM System z mainframes running Linux as a cornerstone of its strategy of moving all new development to virtualization and J2EE as a means of "future-proofing" its IT environment. And, when Endress+Hauser, a Switzerland-based specialist in measurement technology for process engineering, needed to improve its disaster recovery capabilities and also reduce total cost, it also turned to System z.
The IBM mainframe’s refrigerator-size, energy-efficient footprint can consolidate huge numbers of servers into one, allowing organizations to not only save money but gain the security, flexibility, and dependability of System z at the same time.
The Value Equation for PowerLinux: Power + Linux + Virtualization = Scalability + Affordability + Flexibility
When we talk about PowerLinux, what we mean is industry-standard Linux available on IBM Power Systems servers. PowerLinux takes the industry-standard Linux that everybody knows and loves - the enterprise distributions from Red Hat and SUSE - and enables it to exploit all the capabilities of the POWER architecture. We at IBM then work with the community to take further advantage of what the Power architecture offers, particularly for virtualization; that work is what makes PowerLinux unique. Here are some of the notable features about PowerLinux that make it a unique platform:
Strong support for virtualization, performance, scalability, flexibility, support for clouds, efficiency, reliability, security and availability – these are all attributes that add up to a strong value proposition for PowerLinux customers.
On April 24, IBM announced IBM PowerLinux solutions, built on a new Linux-only family of servers. These two-socket servers – one rack-mount and one compute node – run industry standard Linux, and have been tuned to run key emerging workloads like big data and open source network infrastructure.
With these new offerings, IBM is taking Power’s well-known strengths in scalability, flexibility, and security and making them available to customers that might not previously have thought of Power as an option. Now, quite simply, IBM is taking pricing off the table.
The PowerLinux servers are smaller, entry-level servers that are ideally suited for scale-out type workloads which are very popular in the Linux space, and are focused on three main areas:
What makes PowerLinux ideal for these uses is that multiple workloads can be placed on a single server in a virtual environment, effectively avoiding server sprawl. Also, the POWER7 architecture with its multithreading capabilities is ideal for workloads that need to do many things at the same time.
Linux has really changed the economics of IT, and in these tough times many customers are turning to the open source operating system in order to control costs. It also provides rich functionality as a result of its vibrant developer community. IBM continues to contribute to and participate in the development of Linux, both financially and with a global team of software developers who are dedicated to its continued growth.
Though Linux and Power Systems were born at different times for different reasons, both have transformed the markets they serve over time. And despite their different starting points, they are now meeting in the middle, building the right combination of technologies for mid-market companies. Linux now offers affordability and a rich set of capabilities to make it “enterprise ready” and Power, which inherently exploits virtualization, has also become more ubiquitous and more affordable.
As a result, this new Linux-only offering from IBM can enable mid-market customers to gain the superior capabilities of IBM Power for about the same investment they would make for an Intel- or AMD-based solution, if not significantly less. In addition to costing less, PowerLinux solutions can perform faster, scale better, and stay up longer than competitive x86 products. When you think about the expertise and capabilities that IBM brings to bear by combining the widely used open source Linux operating system and the very affordable new Power product line, this is a very compelling value.
It’s time to have a conversation about what PowerLinux provides.
Interested in learning more about PowerLinux?
I am thrilled that this week we released SUSE Linux Enterprise 11 Service Pack 2. It's a big step forward for us at SUSE. This is the first major product update since we started operating as an independent business unit within The Attachmate Group. It's also a good indication of our commitment to the SUSE Linux Enterprise platform. As you can see, the acquisition didn't interrupt development at all. Quite the opposite; it means we can now focus 100% on the products and innovations that matter most to our SUSE customers.
SP2 is packed with new features and capabilities that I hope you'll take the time to explore (after you read this post, of course). SUSE Linux Enterprise 11 SP2 is the first release built using a new forward-looking development model that combines modern Linux kernels, consistent libraries and interfaces with a unique, forward-porting approach. It means you gain faster access to open source innovation, and the ability to use the latest hardware, without losing enterprise quality and application compatibility. Truly a win-win.
One area that's been significantly enhanced is virtualization. In addition to new support for Linux Containers, SP2 includes updated Xen and KVM open source hypervisors. Both SUSE and IBM have worked together to provide cross-platform support and choice for customers with mixed IT environments. We’re proud of the leadership and support we’ve demonstrated for KVM and Xen. Since 2010, SUSE Linux Enterprise has provided commercial support for KVM. Today, our collaboration with the Linux community and our strong partnerships enable SUSE and IBM to deliver an open standards-based solution for one of today’s most requested features – KVM support for Windows guests.
SP2 is a particularly important release for IBM customers because it adds terrific performance-enhancing and power-saving capabilities. This will help IBM customers take full advantage of the native resiliency of IBM Systems servers and get even more benefit from server virtualization.
This is one more chapter in the 20-year partnership between SUSE and IBM. Our shared belief that open standards and open source software are absolutely critical to the future of IT is reflected in the latest version of SUSE Linux Enterprise 11. Take it for a test drive and see for yourself how well it performs. www.suse.com/promo/sle11sp2.html
Vice President - Global Alliances and Marketing, SUSE
As virtualization becomes ubiquitous in enterprises, it’s important to note that KVM in particular offers advantages that stem from its simple architecture. At its most basic, KVM is a feature added to Linux that allows Linux and other operating systems, such as Windows, to be virtualized - as opposed to a separate, complete virtualized operating system à la the Xen model. This not only simplifies KVM, but allows KVM to leverage the world's largest and best development community (that's the Linux community, in case you were confused...) instead of having to duplicate core operating system development work.
The Challenge of Virtualization and the Birth of KVM
Virtualization of course isn’t new - IBM has been shipping virtualization capabilities in servers since 1967, when we first released CP/CMS for the mainframe. But unlike the mainframe processor, the ubiquitous x86 processors were not originally designed to support virtualization. VMware was able to work around this using dynamic translation, which is a fairly complicated technique. Meanwhile, AMD and Intel independently decided to add virtualization support to their hardware, setting the stage for a simpler way to virtualize on x86 computers. Both added a new execution mode to their processors called non-root mode, which allows untrusted operating system code to run safely and is entered with a single instruction. What once took millions of lines of code to parse and translate arbitrary sequences of unsafe machine code into a safe version now takes just a single processor instruction. This enabled a radically different approach to virtualization: instead of building a hypervisor focused on machine code translation, virtualization support could be added as just another feature to an existing operating system, like Linux.
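On Linux, this hardware support shows up as CPU feature flags ("vmx" for Intel VT-x, "svm" for AMD-V) and, once the kvm module is loaded, as the /dev/kvm device node. A minimal sketch of checking for both - the helper names are ours, not part of any KVM tooling:

```python
import os

def cpu_has_virt_extensions(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises Intel VT-x ("vmx") or AMD-V ("svm")."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                # The "flags" line lists every feature bit the CPU exposes.
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
    except OSError:
        pass  # not a Linux /proc layout, or file unreadable
    return False

def kvm_device_present(dev_path="/dev/kvm"):
    """Return True if the kvm kernel module has exposed its device node."""
    return os.path.exists(dev_path)

if __name__ == "__main__":
    print("virt extensions:", cpu_has_virt_extensions())
    print("/dev/kvm present:", kvm_device_present())
```

When both checks pass, KVM can run guests using the hardware's non-root mode rather than any form of software translation.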
By the time those processors came out around 2005, VMware was gaining significant traction with their dynamic translation technology, but the only widely-deployed open source virtualization software was Xen. Xen differs architecturally from KVM: Xen is itself an operating system with virtualization capabilities, whereas KVM simply adds virtualization support to an existing operating system - Linux. To avoid writing a complex machine code translator, Xen relied on modifying any guest that ran on top of it so that it could co-exist with the hypervisor. That special-purpose hypervisor was too restricted to provide functionality such as device drivers, which necessitated a special privileged guest (known as Domain-0) - which happened to be Linux. This Domain-0 guest is in effect an extension of the hypervisor, running at essentially the same privilege level. Simply put, a Xen solution has two operating systems and KVM has one. That difference provides two advantages for KVM: less development is required for one OS than for two, and the effort of keeping two operating systems in sync (simultaneous patches, etc.) isn't required for a single-OS solution (KVM as a functional extension of Linux).

In the summer of 2006, KVM was announced by Avi Kivity of a small startup company called Qumranet. From the beginning, Qumranet was focused on developing a VDI (virtual desktop infrastructure) product. One of the original founders, Moshe Bar, was a co-founder of XenSource, and Qumranet initially focused on using Xen as its target hypervisor. Avi had some experience with Xen, but he thought he would have an easier time just adding virtualization support to Linux - and a few months later, KVM was born.
KVM Grows Up Fast and IBM Support
IBM was a significant supporter of Xen for quite a while. Virtualization support on x86 was a critical need for the then-maturing (now matured) Linux server market. As is the case with open source in general, improvements emerge rapidly, based on the learnings of the past. In fact, it might be argued that the ability to learn and rapidly improve is one of the distinctive characteristics of the Linux development community that make it so powerful. In any case, it became obvious fairly quickly that KVM had the architectural characteristics necessary to become a virtualization solution with the performance, function, scaling, and robustness that the server market needed - with the simplicity of being an integrated kernel module instead of a standalone solution. Once that was clear, the IBM Linux Technology Center (IBM's Linux development team) made the decision for IBM to switch from Xen to KVM as our strategic open hypervisor. Of course we did (and do) continue to support our customers that deployed on Xen in the early days, but now we recommend all new deployments be made on KVM. There isn't anything wrong per se with Xen; in the natural way open source works, a later solution is now superior to an earlier one.

That decision seems to have been borne out by the absolutely staggering rate and pace of maturation of KVM. KVM has gone from the vision of a single individual to a world-class, competitive hypervisor in just a few years. That is due in large part, once again, to the simplicity of having KVM as "merely" a kernel module, which enables those working on KVM to have 100% of their efforts result in improvements to KVM (leveraging the overall community for hardware support, device support, build, test, integration, etc.).
The KVM development community is large, diverse, and growing. My personal expectation is that the phenomenal growth in KVM capabilities will continue, that KVM adoption will grow sharply, and that KVM will emerge as the de facto open virtualization alternative. That is, at least, where IBM is placing its bet and where our folks are working.
Vice President, Open Systems and Solutions Development
IBM Systems & Technology Group
Interested in Open Virtualization?
Red Hat and IBM have worked closely together over the last 12+ years collaborating to bring better technology to our customers, specifically focused around Linux. About five years ago, our companies expanded our collaborative focus to include joint work on virtualization and, more specifically, on the Kernel-based Virtual Machine (KVM) hypervisor. Since then, we’ve closely partnered to drive open virtualization forward for the benefit of global enterprises.
Today, Red Hat releases Red Hat Enterprise Virtualization 3.0. The 3.0 platform brings a compelling balance of performance, reliability, scalability, openness, and cost advantages not offered by competing alternatives. With extensive updates to both the hypervisor and management system, Red Hat Enterprise Virtualization 3.0 includes over 1,000 features, enhancements, and improvements.
Red Hat Enterprise Virtualization 3.0 enters a market with strong demand for an open alternative to proprietary competitors. In the past several months, we’ve seen strong traction and incredible support for open virtualization technologies through organizations like the Open Virtualization Alliance (OVA) and open source projects like the oVirt project, in which both Red Hat and IBM have played a large leadership role. Adding to this, we also have thousands of customers using Red Hat virtualization technologies across industry segments who have recognized the value of our offering, and who see that Red Hat Enterprise Virtualization is a strong player in the emerging dual-source trend, in which companies look to use more than one virtualization solution to meet their needs. Over 80 percent of Red Hat Enterprise Virtualization customers are deploying it side-by-side with VMware. In addition, approximately 50 percent of Red Hat’s largest customers by revenue, including some of the world’s largest banks, telecommunications companies and government organizations, have begun deploying or piloting Red Hat Enterprise Virtualization.
In August, we released a downloadable evaluation for Red Hat Enterprise Virtualization and have since seen a 50x increase in the rate of evaluations. Going forward, we’ll have a fully supported downloadable trial program to help meet demand for Red Hat Enterprise Virtualization. Download the fully supported, 60-day trial here.
Red Hat Enterprise Virtualization 3.0 does not debut alone today. It arrives with the backing of long-time partners like IBM, as well as our new Red Hat Market Place of ISV partners who have integrated their applications with the 3.0 product through its APIs.
Please join us in celebrating Red Hat Enterprise Virtualization by attending our virtual event today to learn more about this news and the future of Red Hat Enterprise Virtualization. If you aren’t able to join us live, the content will be on-demand until April 18, 2012.
By Navin Thadani, Senior Director, Virtualization Business at Red Hat
The clouds are rolling in. And, as more organizations move to evaluate and deploy cloud technologies, there are considerations to keep in mind to help avoid being buffeted in the future by winds of change across the IT landscape.
Virtualization is a step on the road to the cloud - whether it is a public or private cloud. In order to build a cloud you need virtualization underpinning it. Open virtualization can play a key role in maintaining flexibility for cloud deployments, and our clients are telling us that you should have an open cloud - one that lets you move workloads from one cloud to another, and take your data and applications to a different public cloud vendor, or between private and public clouds, if you choose to do so.
As a result, we are seeing that cloud environments very often use open source hypervisors. For example, a modified early version of Xen has been used by Amazon, and IBM is using KVM - the Kernel-based Virtual Machine - in its SmartCloud Enterprise, an agile cloud computing infrastructure as a service, and also in its biggest private cloud, the IBM Research Compute Cloud.
Taking an open virtualization approach enables you to have an open cloud, but there are also other factors. For example, if you look at the parameters in terms of cloud environments, one of the first requirements is to be able to deliver IT at low cost – an advantage closely associated with Linux and other open source technologies. In general, people choose to go with cloud rather than traditional IT because it will enable them to get their IT less expensively, whether it is a private cloud shared between people inside one organization or it is going outside, buying IT from a public cloud provider.
Cost – low cost – is important, and then, associated with that are the levels of scalability you can get and what you are able to achieve in terms of packing virtual machines on top of the same server. KVM, for example, offers high levels of scalability, building on top of the scalability provided by Linux. This is significant because again going back to cost, if you are able to get more virtual machines efficiently onto the same server, then you are able to lower the total costs of the cloud that you are then providing.
A third consideration for any IT deployment – and certainly those in the cloud – is security, and that also connects back to KVM’s Linux roots. Since KVM builds on Linux, it can take advantage of the mandatory access control security that SELinux provides, which means you can have high levels of protection – really military-grade isolation – between virtual machines from different organizations in the cloud. And where a cloud has multiple tenants, you can say with certainty that no one can get from one partition to another partition – and that is a big benefit in terms of providing cloud environments.
No one can foresee all of the changes to the overall IT market – let alone, their own organization – that will take place in the years ahead, so keeping options open when it comes to critical IT workloads in the cloud is important to future-proofing your IT infrastructure. This is why IBM is investing in Open Virtualization – KVM.
Jean Staten Healy
Director WW Cross-IBM Linux and Open Virtualization
Linux Journal just released their 2011 Readers’ Choice Awards. I am very pleased to share in this blog that IBM is the winner in the “Best Linux Server Vendor” category, for the second year in a row.
Every year, Linux Journal invites its readership to cast their vote for their favorite Linux vendor. This year, over 20 server vendors were nominated for the “Best Linux Server Vendor” award including Dell, HP and Sun Microsystems. The awards are announced in the December issue of the Linux Journal.
IBM's win in this category is a testament to IBM’s longstanding commitment to Linux. Eleven years ago, IBM announced a $1 billion investment in Linux, taking the technology from a successful science project to a major force in business IT. Not only was this a turning point for Linux and the Linux community, it was also a pivotal moment in IBM's history. This investment was one of the first times IBM made a decision to embrace open source software and make it core to our business strategy.
Today that tradition continues. IBM is consistently among the top commercial contributors of Linux code as measured regularly by The Linux Foundation's "Who writes Linux" series. Linux also continues to be a fundamental component of IBM business --embedded deeply in hardware, software, services and internal deployment.
Recently, IBM again showcased its longstanding commitment to open source and virtualization by backing KVM, the Linux-based hypervisor. Kernel-based Virtual Machine (KVM) is the next step in the evolution of x86 virtualization technology. KVM is an open source hypervisor that provides enterprise-class performance, scalability and security to run Windows and Linux workloads. KVM provides businesses with a cost-effective alternative to other x86 hypervisors, and enables a lower-cost, more scalable, and open cloud.
Linux evolved into a leading enterprise OS thanks to its community of developers, and KVM will evolve for the same reasons. Since KVM is based on Linux, KVM takes advantage of the scheduler, memory management, power management, hardware device drivers, platform support, and other features continuously being produced by the thousands of developers in the Linux community. This gives KVM a significant "feature velocity" that other virtualization solutions cannot match.
Celebrate with us as IBM wins this prestigious “Best Linux Server Vendor” Award.
Jean Staten Healy
Director, WW cross-IBM Linux and Open Virtualization
For those of us who have witnessed the emergence of open virtualization technologies such as KVM, the benefits of an open approach to server virtualization can seem obvious. But one of the things that KVM has suffered from is that it has not been well explained in the marketplace. It is actually quite a geeky name if you think about it – and then explaining what KVM stands for – Kernel-based Virtual Machine – doesn’t actually help to explain it much more! It would probably have been easier to attract attention if it had a cooler name but in fact it is a really good product and is now very much enterprise-ready. So we now need to get the word out about it – we need to talk about the benefits and we need to talk about why the hardware support for new virtualization hypervisors like KVM has come into the marketplace and talk about what people are doing with it.
“…you have a lot of people working on the same problem…”
The term “open virtualization” means it is an open source product – built by a community of developers, from both vendors and individuals, all working together on the project in the same way as other projects such as Linux, Apache, Eclipse and so on. The benefit of an open virtualization approach is that you have a lot of people working on the same problem; you are able to bring a lot of brains to bear to build the system, and as a result, it tends to progress faster in terms of development. Like any other open source project, because you can see everybody else’s code, the quality can be very high because the code gets peer review. And, of course, since it is an open source project, everyone can take that project and use it as the basis of the products they build – so, for example, companies like Red Hat, SUSE and Canonical can build their products on top of it.
The second important aspect of open virtualization is that you are able to move virtual machine images between different hypervisors, so a virtual machine that runs on one hypervisor can run on another. That is where a lot of the work has been done in the industry around the Open Virtualization Format, through the standards body DMTF (Distributed Management Task Force).
And the third critical component of the open virtualization approach is open virtualization management. Are you able to manage a number of different hypervisors? Are there open interfaces for managing different hypervisors from “a single pane of glass” or single interface? This is where we expect the new oVirt project to be very important (http://www.ovirt.org).
“...you can reduce costs and you get choice…”
The benefits of open virtualization are that you can reduce costs and you get choice – you are not locked into one vendor. But you need to look at all three of those streams – the code, the format of virtual machines, and the virtualization management.
The designers of KVM, in particular, developed it as a module that plugs into Linux, which of course is another open source project, and turned Linux into a hypervisor. The benefit of that is you are now able to take advantage of all the work that has already gone into Linux. Linux already has very good memory management, it already has very efficient process schedulers, it already has high levels of security for access control, and it has wide device support. You can take advantage of those features and the net out of it is that with KVM, you can have a high performance, very scalable, and very secure hypervisor.
To find out more, a great place to start is the Open Virtualization Alliance (http://www.openvirtualizationalliance.org), whose mission includes educating the market about open virtualization. Speaking of education, the OVA is organizing an educational webcast on December 8th called Understanding KVM as an Enterprise-Grade Solution. Register for this free webcast at: http://www.openvirtualizationalliance.org/news/index.html
Simplifying today's highly complex IT infrastructures – to speed deployment of cloud implementations, maximize utilization of data center resources, and improve productivity – is an important challenge facing today's IT staffs, which are already stretched thin. Add to this equation a diverse portfolio of compute, storage and networking platforms, each with separate, disparate management tools that don't communicate, and you have a situation that could bring even the highest-performing IT departments and data centers to a grinding halt.
Many of our clients have stressed the need, stated simply, to simplify the complicated puzzle of systems management. The desire to improve productivity, reduce IT costs and "do more with less" while being continually pushed to achieve higher levels of service seems to be at the forefront of IT professionals' minds.
Clients as diverse as Codorníu, Spain's leading producer of sparkling wine; B C Jindal, an Indian business conglomerate; GHY, a Canadian brokerage house; and the Chinese city of Wuxi are each using IBM systems management solutions to address questions such as: "How do we effectively deploy and manage resources in the cloud?" "How do we rapidly install, configure, deploy and provision resources quickly and easily?" "How do I effectively pool together resources to meet demand when and where I need them?" And, last but not least, "How do we manage heterogeneous platforms as a single unified entity?"
IBM has taken steps to make systems management of IT infrastructures simpler. IBM Systems Director is making it easier for clients to manage heterogeneous environments using a "single pane of glass" to automate discovery, monitoring and management of IT assets (servers, storage, network devices, energy, physical and virtual resources) and workloads. Before GHY International implemented IBM Systems Director, its IT staff spent 90% of their time on server management and basic administration. Today, GHY International's staff spends approximately 5% of its time performing the same tasks, enabling faster time-to-value for strategic business initiatives. A GHY executive commented on IBM Systems Director’s impact: "The effect on productivity was astounding because it allowed us to concentrate on new services to support GHY's business strategy. We were able to add hundreds of thousands of dollars of value to the business as a result."
The ability to increase resource utilization while reducing costs is a common theme we hear from many of our clients. IT staffs continually attempt to balance capacity against scheduled (and often unscheduled) workloads. As Forrester Consulting projects in its study, Application Modernization And Migration Trends in 2009/2010, investing in additional servers to meet capacity is increasingly less of a viable option; the report projects reductions in IT operating expenses of 36% and in capital expenditures of 32%. Companies such as B C Jindal have used IBM Systems Director to monitor, understand and proactively identify the impact of demand on the company's IT infrastructure. With this insight, the IT staff was able to maximize utilization of server resources and reduce the number of server purchases and capital expenditures by an estimated 50 percent annually.
Achieving additional business agility and flexibility is an issue driving IT departments to the cloud. Taking weeks or months to deploy resources for new applications, workloads or services is a luxury few can afford. Codorníu, the Chinese city of Wuxi and China Telecom's Jiangxi Subsidiary all turned to IBM Systems Director to quickly deploy and manage new services and applications. Codorníu, Spain's leading sparkling wine producer, implemented a cloud infrastructure centrally managed by IBM Systems Director and cut its administration costs in half while reducing its data center footprint by 70%!
The Chinese city of Wuxi and China Telecom's Jiangxi Subsidiary used IBM Systems Director to quickly deploy shared, revenue generating services, based on a "pay for what you use" model. For China Telecom's Jiangxi Subsidiary using IBM Systems Director enabled a more fluid and flexible business model by reducing time to deployment from three to four months to two to three days. Wuxi's Cloud Center used IBM Systems Director's "single pane of glass" to centrally manage compute resources for local businesses, helping them avoid capital expenditures for hardware.
IBM Systems Director can help IT reduce the amount of time it takes to install, deploy, discover, monitor and manage resources in today's highly complex infrastructures. IBM's systems management approach, leveraging a "single pane of glass," provides enhanced visibility and control of a heterogeneous environment, enabling IT administrators to achieve maximum utilization of data center resources – which can decrease data center costs and increase productivity.
Want to learn more?
Click on the link to download the white paper – “IBM Systems Director: Optimized and simplified management of IT infrastructures.” The paper describes how IBM Systems Director can help you speed deployment of cloud implementations, achieve maximum utilization of data center resources while improving IT staff productivity.
(Originally posted on The Storage Alchemist blog)
After a full first day at VMworld, I started to think more about IBM and its technology solutions that help customers in a VMware environment. Here is a top ten list of things to consider when looking at a VMware implementation, and how IBM can help.
#1 Integration
VMware is playing Switzerland and ensuring all vendors are on a level playing field, so when other vendors state that they have “better” or “closer” technology integration than their competitors, it’s probably not true. Some vendors may choose not to integrate with certain things, but rest assured, all of VMware’s APIs are open to all vendors. Take a look and see how IBM is providing plug-ins for vSphere, SRM, and VAAI in XIV as well as other storage platforms.
#2 Ease of Use
IBM has seen, firsthand, a number of our customers switch from another platform to XIV because of the simplicity of the XIV solution. One example is a large manufacturer that is provisioning new VMware instances in less than five minutes.
One XIV customer, a very experienced storage administrator, saw the XIV GUI and said: "I don't get it (the XIV GUI). It can't be that easy. Either I'm missing something or they are not showing me everything." The reality is, it is that easy, and that interface is used throughout the IBM storage portfolio, including the Storwize V7000 and SVC.
#3 Storage Efficiency
Probably one of the most important topics this year is storage efficiency, and IBM is a leader in this department. The Storwize V7000 utilizing compression, or N-Series with the Real-time Compression appliance, can reduce the VMware storage footprint by up to 75%. Users tell us that by implementing VMware, their storage footprint has grown by as much as 4x – their overall IT budgets didn’t get better, the dollars just shifted from servers to storage. IBM’s Real-time Compression users can save up to 75% without any performance impact. Additionally, Real-time Compression is the only compression technology that works in conjunction with deduplication, compressing the data before it is deduplicated, giving an added benefit to the technology.
Now users have the opportunity to get their overall IT budget back under control.
#4 Data Protection
The reality here is that many enterprises are waiting for the war between vendors in this space to play out, or are looking to embedded snapshots and disk-based technologies with mirroring to cut out all of the host-based challenges of data protection.
A report by Taneja Group, sponsored by multiple clients, suggests that the biggest issue in virtual environments is data protection: many enterprises do not know what they need to do, and they are looking to their current vendors to provide solutions. So work closely with the IBM team and leverage all of the work that IBM has done with Tivoli and VMware to help solve your data protection challenges.
A lot of folks like to talk about deduplication when it comes to VMware; just make sure it is implemented properly and in the right place. ProtecTIER offers deduplication ratios as good as any, along with great performance.
#5 Flexibility
It is hard to be more flexible than IBM. From hardware to software to services to partners, IBM offers solutions across a wide spectrum – hardware solutions that can meet a range of performance requirements and application types, and software that can help users analyze their data more effectively. IBM can also deliver all of these solutions through relationships with our ISVs and partners, offering superior flexibility.
#6 High Availability
When it comes to high availability in storage, it is hard to beat the new V7000 or the XIV product. Innovatively designed specifically for high availability, a virtualized storage platform such as XIV gives users real-world availability and reliability that does not sacrifice performance in any of their applications.
#7 Scalability
With IBM XIV, you can simply scale as you need to – automatically taking advantage of new capacity and linear performance improvements – while managing the entire enterprise from a single, easy-to-use GUI.
Also, with Real-time Compression, you now have the added benefit of putting more capacity in your existing footprint to do even more analytics while saving on footprint, power and cooling – all in real-time.
#8 Services / Solutions
IBM is the worldwide leader in providing services. IBM is the largest OEM of VMware solutions on the planet and provides support and services in 170 countries around the globe. IBM’s Global Services team has architected and installed hundreds, if not thousands, of VMware implementations, helping customers go from a non-virtualized to a virtualized world. IBM, together with its partners, can help migrate a customer's virtualized environment without a long outage, maintaining application availability and customer production, and can help with the move to thin provisioning and a truly virtualized platform – not Vblocks and a coalition.
#9 TCO / ROI
IBM offers great solutions that reduce the risk, cost, and complexity of the virtualized world. IBM focuses on real-world customer challenges. Customers have been hit hard these last few years when it comes to the budgets they have to manage their IT environments. We keep helping our customers do more with less by enabling a more efficient storage platform than any other vendor. IBM XIV, V7000, N-Series, SVC and ProtecTIER solutions are a great fit for solving difficult VMware challenges, and we have real-world references to prove it.
#10 100 Years of Innovation
The bottom line: there is always more to do, IT changes at a rapid pace and it is the vendors job to keep up with the needs of its customers. IBM has been doing this for 100 years and we will continue to do so.
Why IBM SONAS?
As dependencies on today’s enterprise business computing increase, ensuring that applications are highly reliable becomes more critical. The constant outpouring of data from day-to-day enterprise business applications creates new storage challenges for today’s enterprise IT environment.
VMware vSphere makes it simpler and less expensive to provide higher levels of availability for mission-critical enterprise business applications.
The IBM Scale Out Network Attached Storage (SONAS) provides extreme scale out capability, with a globally clustered network-attached storage (NAS) file system built upon IBM General Parallel File System (IBM GPFS).
IBM SONAS is a best-in-class storage solution providing the performance, clustered scalability, high availability (HA), and functionality that enterprise virtual IT infrastructures demand.
VMware vSphere delivers higher levels of availability with VMware HA and VMware Fault Tolerance (FT) features.
An integrated VMware vSphere and IBM SONAS virtual IT infrastructure meets the enterprise demands for high availability and massive scalability, in terms of both performance and storage capacity.
Economic challenges drive enterprise businesses to provide high levels of application availability, performance, and extreme scalability while simultaneously achieving greater cost savings and reduced complexity. As a result, data center infrastructure is increasingly virtualized, because virtualization provides compelling economic, strategic, operational, and technical benefits. Planning a robust, highly available, and extremely scalable infrastructure solution for enterprise virtual data center environments hosting mission-critical applications is of utmost importance.
Some of the key aspects of an effective high-availability virtualized infrastructure include:
Operational efficiency and management simplicity
VMware vSphere provides uniform, cost-effective failover protection against hardware and software failures within an enterprise virtualized IT environment with VMware high availability and fault tolerance features.
Traditional NAS filers do not scale to high capacities. When one filer was fully utilized, a second, third, and more filers were installed. As a result, enterprise IT administrators very often find themselves managing silos of filers. Capacity on individual filers could not be shared: some filers were heavily accessed while others sat mostly idle.
The SONAS system is available in as small a configuration of 27 terabytes (TB) in the base rack, up to a maximum of 30 interface nodes and 60 storage nodes within 30 storage pods. The storage pods fit into 15 storage expansion racks. The 60 storage nodes can contain a total of 7200 hard disk drives when fully configured using 96-port InfiniBand® switches in the base rack. The SONAS advanced architecture virtualizes and consolidates multiple filers into a single, enterprise-wide file system, which can translate into reduced total cost of ownership, reduced capital expenditure, and enhanced operational efficiency.
Assuming 2 TB disk drives, a fully configured SONAS system can scale up to 14.4 petabytes (PB) of raw storage and billions of files in a single large file system. A fully configured 14.4 PB SONAS system can have as few as eight file systems or as many as 256.
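As a sanity check, the quoted raw-capacity figure follows directly from the drive count:

```python
# Sanity check: 7,200 drives at 2 TB each gives the 14.4 PB quoted above.
drives = 7200            # hard disk drives in a fully configured SONAS system
drive_size_tb = 2        # assumed drive size, per the text

raw_tb = drives * drive_size_tb
raw_pb = raw_tb / 1000   # decimal units: 1 PB = 1000 TB

print(raw_pb)  # 14.4
```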
Integrating VMware HA and VMware FT technologies with petabyte-scale IBM SONAS offers a best-in-class value proposition. The combination of VMware vSphere and petabyte-scale IBM SONAS provides a simple and robust high availability solution for planned and unplanned downtime in virtual enterprise data center environments hosting mission-critical applications.
For more information on an IBM SONAS powered virtual IT infrastructure, please read the following technical reports:
Udayasuryan A Kodoly
There are many important considerations for IT decision-makers when it comes to virtualization. Top of mind for enterprises are the systems that need to be in place to support virtualization: servers, networks and storage. But it’s important that IT also understands the virtualization implications in the middleware and application stack. For many organizations, this is an afterthought until systems administrators actually begin to virtualize their application workloads. Understanding application virtualization is critical because one particularly large IT company has been causing quite a bit of confusion and pain for enterprises trying to virtualize their workloads. And unfortunately, many IT organizations are caught completely by surprise by the company’s stance.
If you haven’t yet clicked on the hyperlink above, I’m referring to Oracle’s draconian policies on virtualization. Industry analysts and pundits have been warning enterprises for months about Oracle’s inflexible licensing terms and conditions that require extra licensing costs to support key virtualization features, charge companies for processors they do not use, and do not provide support to any leading virtualization platform such as VMware, KVM or Hyper-V. This last issue, hypervisor support, is especially troublesome because now there is a perception among some enterprises that all IT providers have similar policies. Take this quote from a recent SearchServerVirtualization.com article for example:
While that statement describes Oracle’s approach to a “T”, the truth is that not all companies operate the same way with virtualization. In IBM’s case, our middleware and software can support deployments running in all leading virtualization platforms including VMware, KVM, Hyper-V, Xen, PowerVM and z/VM. We let our clients choose which platform is right for them, and we even support the entire “mixed” stack if that is what the client prefers. We make it easy for enterprises to complete their virtualization journey, leveraging existing IT investments and applications, with no surprises along the way.
Now that you know, don’t fall for Oracle’s virtualization trap.
Twenty years ago, who would have thought Linux would evolve into the biggest collaborative development project in the history of computing? In 2000, IBM announced a $1 billion investment in Linux, taking the technology from a very successful science project to a major force in business IT. Not only was this a turning point for Linux and the Linux community, it was also a pivotal moment in IBM's history. This investment was one of the first times IBM made a decision to embrace open source software and make it core to its business strategy.
IBM's involvement and significant investment in Linux has allowed IBM to take on a leading role in the advancement of open source technologies, offering its clients choice, lower costs and interoperability. Most recently, IBM helped found the Open Virtualization Alliance to further customer adoption of open source virtualization technologies such as KVM the Linux-based hypervisor.
As IBM celebrates its Centennial this year, this investment in Linux remains one of the key moments in IBM's history (read more about that in the IBM 100 Icons of Progress article “Linux: The Era of Open Innovation”). And Linux continues to be a fundamental component of IBM business—embedded deeply in hardware, software, services and internal development.
IBM is not the only one celebrating an anniversary this year. The Linux Foundation is officially celebrating 20 years of Linux (#linux20th) this week at LinuxCon North America in
IBM is Platinum Sponsor of LinuxCon North America. Key contributors from the IBM Linux and KVM development and strategy teams are heading to LinuxCon for a variety of activities including:
For more details about IBM’s activity at LinuxCon, click here.
In its Centennial year, IBM is especially excited to celebrate 20 years of Linux at LinuxCon North America 2011. We wish The Linux Foundation and all LinuxCon attendees a fantastic week in
WW Cross-IBM Linux and Open Virtualization Marketing
IBM Systems and Technology Group
When taking advantage of virtualization to flexibly and conveniently manage workloads, a new security model should be considered. In a traditional environment, where a single operating system runs on its own private hardware, the central attack vector is the network. In a virtualized environment, however, where multiple guest operating systems share the same host operating system and hardware, attacks can come not only from outside the system, through the network, but also from within the system itself. In this post, we'll touch on a number of steps that can be taken to minimize the security exposures in your KVM environment.
For more details on the topics discussed below, please take a look at the KVM Security blue print located at: http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=/liaat/liaatseckickoff.htm. It contains a thorough discussion on these topics, including several configuration examples.
Minimize vulnerabilities by minimizing the TCB
The trusted computing base (TCB) is the combination of hardware and software in a computer system that is trusted to enforce security for the system. The TCB must be verified and maintained regularly to make sure it is correct. A smaller TCB naturally results in a smaller chance of having a bug in the TCB. Therefore, the size of the TCB has a direct effect on the security quality of KVM. To reduce the TCB, for example, you can turn off unused daemons and remove unnecessary packages from the host operating system.
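Shrinking the TCB is largely a matter of inventory: list what is running, compare it against what the host actually needs, and review the rest for removal. A minimal sketch of that comparison (the service names here are purely illustrative):

```python
# Illustrative sketch (not from the blueprint): given the services currently
# running on a KVM host and the minimal set the host actually needs, report
# which daemons are candidates for removal to shrink the TCB.
def tcb_reduction_candidates(running_services, required_services):
    """Return services that are running but not required, sorted for review."""
    return sorted(set(running_services) - set(required_services))

candidates = tcb_reduction_candidates(
    running_services=["libvirtd", "sshd", "cups", "avahi-daemon", "ntpd"],
    required_services=["libvirtd", "sshd", "ntpd"],
)
print(candidates)  # ['avahi-daemon', 'cups']
```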
Separate host and guest networks to protect the host
Prevent MAC address spoofing within a bridged network
Isolate virtual machines from each other
Be sure that storage devices are secure
Perform secure remote management
Spice can be used to provide high-quality remote access to KVM and supports OpenSSL authentication and encryption. Spice is not described in the KVM Security blue print that was referenced earlier in this post. For more information on Spice, see: http://www.spice-space.org/.
Limit virtual machine resources with control groups (cgroups)
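Under the cgroup v1 hierarchy, capping a guest comes down to writing values into the controller filesystems. A sketch of the writes involved, assuming the conventional /sys/fs/cgroup mount and a hypothetical per-guest group:

```python
# Illustrative sketch: build the (file, value) writes that would cap a guest's
# CPU weight and memory under cgroup v1. The /sys/fs/cgroup paths and the
# "machine" group name are assumptions; adjust for your distribution.
def vm_cgroup_limits(vm_name, cpu_shares, mem_limit_bytes):
    base_cpu = f"/sys/fs/cgroup/cpu/machine/{vm_name}"
    base_mem = f"/sys/fs/cgroup/memory/machine/{vm_name}"
    return [
        (f"{base_cpu}/cpu.shares", str(cpu_shares)),
        (f"{base_mem}/memory.limit_in_bytes", str(mem_limit_bytes)),
    ]

# Cap a guest named guest01 at 512 CPU shares and 1 GiB of memory.
for path, value in vm_cgroup_limits("guest01", 512, 1 << 30):
    print(path, value)
```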
Protect data at rest with disk-image encryption
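One way to get disk-image encryption is the built-in encryption option of the qcow2 format, enabled when the image is created with qemu-img. A sketch of assembling that command (the image name and size are placeholders):

```python
# Illustrative sketch: assemble the qemu-img command that creates a qcow2
# disk image with built-in encryption enabled, so the guest's data at rest
# is unreadable without the passphrase supplied when the VM starts.
def encrypted_image_cmd(path, size):
    return ["qemu-img", "create", "-f", "qcow2",
            "-o", "encryption=on", path, size]

print(" ".join(encrypted_image_cmd("guest01.qcow2", "20G")))
```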
Audit host and guests to obtain valuable forensic information
When running a virtualized environment, security is a critical part of the solution that cannot be overlooked. So be sure to evaluate and consider all of the security features mentioned above before deploying KVM.
Over the past years, x86 virtualization has become widespread through server consolidation, and recently it has been playing a role at the heart of cloud computing. KVM provides a virtualization solution with world-class performance together with the benefits of an open source platform. This post explains the key components of KVM and how they work together.
Hardware virtualization from Linux kernel
KVM is closely associated with Linux because it uses the Linux kernel as a bare metal hypervisor. A host running KVM is actually running a Linux kernel and the KVM kernel module, which was merged into Linux 2.6.20 and has since been maintained as part of the kernel. This approach takes advantage of the insight that modern hypervisors must deal with a wide range of complex hardware and resource management challenges that have already been solved in operating system kernels. Linux is a modular kernel and is therefore an ideal environment for building a hypervisor.
Full Linux hardware support for network cards, storage, and servers
Since KVM uses the Linux kernel, KVM works with network cards, storage adapters, and other hardware supported by Linux. This gives KVM excellent host hardware support that does not lag behind bare metal operating systems.
Hardware virtualization extensions provide a secure and efficient way to run VM code on the physical CPU
At the heart of KVM is a Linux kernel module which safely executes guest code directly on the host CPU. This is made efficient by hardware virtualization extensions, introduced in the mid-2000s by both AMD and Intel and available in almost all modern x86 processors. Virtualization extensions added a new mode of execution that allows unmodified guests to run without giving them full access to memory and other resources.
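You can check whether a host CPU advertises these extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. A small sketch of that check as a parser over cpuinfo-style text:

```python
# Sketch: detect whether a CPU advertises the Intel (vmx) or AMD (svm)
# hardware virtualization extensions by parsing /proc/cpuinfo-style text.
def virt_extension(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"   # Intel VT-x
            if "svm" in flags:
                return "svm"   # AMD-V
    return None  # no hardware virtualization support reported

sample = "processor : 0\nflags : fpu vme de pse msr vmx sse2\n"
print(virt_extension(sample))  # vmx
```

On a real host you would pass `open("/proc/cpuinfo").read()` to the same function.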
Device emulation in user space
While guest code executes directly on the host CPU in a safe manner, most I/O accesses are trapped rather than sent directly to host devices. The guest sees an emulated chipset and PCI bus to which both emulated and pass-through adapters can be added. KVM features paravirtualized networking, storage, and memory ballooning drivers that improve the efficiency of I/O and allow adjusting the amount of RAM available to a guest at run-time.
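A qemu-kvm invocation wires these pieces together. The sketch below assumes a hypothetical guest image and shows the virtio disk, network, and balloon devices mentioned above:

```python
# Illustrative sketch: a qemu-kvm invocation that boots a guest with
# paravirtualized (virtio) disk and network devices plus the memory balloon.
# The image name and memory size are placeholders.
def kvm_launch_cmd(image, mem_mb):
    return ["qemu-system-x86_64",
            "-enable-kvm",                            # use the KVM kernel module
            "-m", str(mem_mb),
            "-drive", f"file={image},if=virtio",      # paravirtualized disk
            "-netdev", "user,id=net0",
            "-device", "virtio-net-pci,netdev=net0",  # paravirtualized NIC
            "-device", "virtio-balloon-pci"]          # run-time RAM adjustment

print(" ".join(kvm_launch_cmd("guest01.img", 1024)))
```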
Runs with SELinux isolation
Device emulation is performed by the qemu-kvm user space process on the host. This allows the kernel module to stay lean and focus on the most performance-critical aspects while userspace device emulation emulates hardware devices in an isolated process outside of the host kernel. The sVirt feature locks down the qemu-kvm process with SELinux Mandatory Access Control so it can only access files and resources it needs and nothing more.
Secure remote management API
Management tools need to monitor and access guests that might be running locally or on remote hosts. This is done through a set of APIs and utilities that enable applications to manipulate guests and automate management tasks. Libvirt provides the language bindings and command-line utilities for developing applications and scripting common operations.
Each host runs the libvirt daemon, which provides secure remote management APIs but can also be configured to serve local clients only and remain invisible over the network. The libvirt daemon maintains guest configurations across reboots and is the central point for setting up networking and storage pools.
Systems management tools build on the libvirt API
Most administration is done with tools that use the libvirt API, especially the virsh command-line tool which presents guest and host management operations. The graphical virt-manager tool can easily manage local or remote guests. Third-party management tooling such as cloud stacks can be used for higher-level datacenter or cloud management and they typically integrate with libvirt.
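Scripting common operations typically means either calling the libvirt bindings or shelling out to virsh. The sketch below only builds virsh argument lists (the helper name and domain names are made up, and option ordering is simplified); on a KVM host you would pass these to `subprocess.run()`:

```python
def virsh_cmd(operation, domain, *extra):
    """Build an argument list for a virsh subcommand."""
    return ["virsh", operation, domain, *extra]

start = virsh_cmd("start", "webserver01")
migrate = virsh_cmd("migrate", "webserver01", "--live",
                    "qemu+ssh://host2/system")
print(start)  # ['virsh', 'start', 'webserver01']
```

The same operations are available programmatically through the libvirt language bindings, which is what higher-level management stacks generally use instead of invoking virsh.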
This completes the short trip through KVM, starting from the core hypervisor, which is implemented as a Linux kernel module, through device emulation by qemu-kvm, to the secure remote management API provided by libvirt. To consumers of KVM, most functionality is abstracted behind the management tools, but its architecture determines its key strengths, including excellent performance and a constantly growing tools ecosystem.
Virtualization+IBM 2700039S5C 2,130 Views
I had the pleasure of being the IBM speaker on an eWeek web seminar yesterday. The topic was cloud computing, but virtualization played an important role in the discourse. We had a lively discussion about the difference between cloud and virtualization. Some believe they are one and the same, but I disagree. To me, cloud computing is a model – a concept that outlines a delivery methodology for various types of services – whereas virtualization represents technology – real products you can buy today. One thing we did agree on: no matter how you classify it, virtualization is the single most important enabling technology for cloud.
Virtualization is more than Consolidation.
Virtualization in all its forms enables consolidation. It allows you to take multiple racks of 1U servers and squeeze them into a much smaller form factor that’s easier to power, cool and manage. It provides a way to take hundreds of disk drives and manage them as a storage pool driving higher utilization and reducing administrative costs. Virtualization allows you to break the ties between workloads and the physical devices on which they run giving the IT staff the flexibility to make sure critical workloads always have the resources they need.
There are other forms of virtualization we can consider as well. Application virtualization provides another way to separate workloads from their physical environment. We can illustrate this by first looking back to the days of “scale out” architecture in data centers. If you needed an additional web server you’d buy another 1U server, configure it with the operating system and web server you needed, test it and add it to your rack. Today you’d use a product like WebSphere to create an additional virtual web server, which would immediately increase your capacity. Once that capacity is no longer needed, WebSphere can remove it from the operating environment. When you add systems virtualization to application virtualization, the possibilities are limitless.
Virtual workloads, virtual machines and virtual appliances are all part of virtualization. Consolidation, and the money it saves, is important, but it’s only the beginning.
Sam Van Alstyne, Cross-IBM Virtualization Marketing and Strategy
Reposted with permission from Susan Eustis. To view the original posting visit wintergreenblog.com
IBM Systems Software
IBM virtualization is challenging existing data center solutions by providing a full suite of z/VM, PowerVM, and open source virtualization to support implementation of images in a tuned-to-task manner. As part of the smarter computing initiative, IBM is positioned to achieve increased automation of workload systems management and the computing environment in a manner that reduces costs for IT departments.
The IBM stack integrates software and hardware. The integrated stack is used to create appliances that can be launched quickly to get customers up and running with systems customized to their own needs. The IBM software stack has achieved a level of maturity that sets its systems a cut above what any competitor offers. This maturity means that systems can run on any hardware platform. Users can design applications and then decide, in a fit-to-purpose analysis, which platform is most appropriate for running that application. High-end, high-performance applications need a Unix platform, while multiple images numbering in the hundreds or thousands run best on the mainframe. The new zEnterprise 196 is very appropriate for running multiple virtualized images because it manages 10 Gb Ethernet transmission speeds and supports high throughput.
The virtualization business case includes a stack of application server images that run on layers at a very economical cost.
When the application images exceed the size of an existing LPAR and a new LPAR must be purchased, the economics are no longer favorable; the overflow portion of the application images needs to move to the cloud until there is enough workload to justify investment in another LPAR.
In the same manner, workloads that are being upgraded or migrated will run in the cloud temporarily until they can be run on the upgraded system. In this manner, the IBM cloud is being positioned as a transitional hosting space.
In addition, IBM has positioned the cloud as a self-provisioning computing platform that offers optimal pricing and better access for users than traditional platforms. The IBM cloud is positioned as a way to achieve optimum use of resources, making full utilization of the available capacity possible. IBM is able to achieve this by combining good storage and networking management with good management of server capacity.
For many organizations today, that lesson would be well applied in the data center. Data center workloads are often fulfilled via standard system types—blades, for instance—on this faulty premise: workloads are fundamentally similar, and they can therefore be fulfilled using fundamentally similar technology.
Adding more services or applications? Scaling up performance to meet unpredictable demand levels? The old-school response to both questions has often been simply this: “Add more blades, and let the workloads take care of themselves.”
The business reality today, however, demands quite a different response. We live in a time of rapid change; IT must change in parallel. Business workloads in almost all enterprise-class organizations have become more diverse, more complex, and more dynamic. Fulfilling them optimally requires a new approach—one that acknowledges the growing need for an improved business outcome from the IT infrastructure, and that moves beyond the standard response of rolling out more hardware.
Rather than simply deploy standard platforms on the assumption that they can be mapped to any given workload, IBM believes that organizations should consider shifting the focus to the workloads themselves. That is, organizations should analyze what a given workload requires now, and is likely to require in the future, to meet business targets. They should then ask themselves how best those workloads can be fulfilled, via infrastructural deployment and integration, to improve service levels, reduce costs and mitigate business risks.
Failing to do so is very likely to result in a suboptimal return on investment from the IT infrastructure. In today's difficult business climate, that can easily translate into the difference between success and failure.
Optimizing different workloads requires different solutions
To illustrate this point, consider the following four business workload classes, all four of which are in common use by businesses today. Each class of workload is fundamentally distinct from the others in terms of its resource requirements and the systems best used to fulfill it.
How should organizations ideally fulfill the requirements of such fundamentally different workload types? As we have seen, no two of these workloads are characterized by identical challenges; no two, similarly, demand identical resources. It stands to reason that no two can be best fulfilled using identical platforms. And the organization that ignores the varying nature and details of these workloads, instead simply deploying more blades in a generic fashion, is not likely to get the best business outcome.
Work with IBM to develop a workload-optimized, dynamic infrastructure
IBM offers a compelling alternative: the concept of the dynamic infrastructure. This is best understood as a flexible, scalable infrastructure capable of assigning infrastructural resources dynamically, in accordance with changing business requirements, via the convergence of IT and business management. It benefits from IBM's deep and proven expertise in assisting organizations of all kinds as they strive to optimize their workloads, and it can also be tailored to match any organization's unique context and requirements.
Naturally, no two organizations have the same goals, resources, challenges or workloads. No two organizations, similarly, will implement a dynamic infrastructure in the same manner. Fortunately, IBM offers a complete range of hardware, software and services from which a custom solution can be developed—a tailored, workload-optimized dynamic infrastructure capable of generating truly superior business value.
Among other elements available to clients for this purpose:
Furthermore, workload optimization is a core element of every aspect and phase of the dynamic infrastructure migration. The IBM process of developing a dynamic infrastructure, in fact, begins not with technology per se, but with organizational workloads. Their attributes—and the goals and requirements associated with them—drive system requirements, which inform and determine optimal system design, which is then optimized still further to ensure workload fulfillment.
In this way, IBM keeps the focus where it belongs: on business goals and the many ways technology can help fulfill them, both efficiently and cost-efficiently, both today and tomorrow.
Virtualization enables workload optimization by optimizing systems and system management
Optimizing workloads—to meet or exceed target service levels while using the fewest resources possible—is a major goal for the enterprise today.
However, before workloads can be optimized, the systems that drive them must be optimized first. And system optimization is exceptionally hard to achieve in a conventional infrastructure, in which services are tied one-to-one to commodity hardware such as low-end x86 systems. Commonly, such an infrastructure will be idle more than 90 percent of the time—generating costs but not business value. And should more resources be required for an unexpected spike in workloads, those resources may not be available.
Virtualization represents a much better approach, through which workload resources can be shifted in real time wherever they are required, service levels can be enhanced as a result, and both idle time and overall costs can be minimized—essentially, a vision of workload optimization. But realizing this vision via a virtualized infrastructure will also typically mean moving to a new management paradigm.
A modern solution such as IBM Systems Director will be needed in order to consolidate and simplify overall management by tracking status/health levels of different servers and hosts, and by fulfilling everyday tasks such as software provisioning and problem isolation. Systems Director elegantly unifies management across multiple operating systems, IBM server groups and certain non-IBM servers—taking the focus away from the details of the technology per se and turning it toward the optimized utilization of the IT infrastructure in the pursuit of business goals.
Create and manage system pools with IBM Systems Director VMControl Enterprise Edition
Now, IBM has taken the next evolutionary step in system optimization through virtualization management.
IBM Systems Director VMControl Enterprise Edition, a plug-in extension that works within the general Systems Director environment, allows the enterprise to create virtual system pools: groups of virtualized resources (servers, storage and network). Because they can be managed as a single entity, system pools thus function as building blocks that administrators can use to optimize systems more easily, more quickly and more consistently—mitigating business risks by enhancing availability, reducing costs by better linking resource allocation to business demands and driving service levels to new heights.
To see how system pools work, begin with the fact that successful virtualization will almost always require careful management of system images. Images contain the complete software stack of operating system, middleware, applications, data and other elements required for a virtual server; IT organizations will therefore usually have many images created for many business purposes. When images proliferate, management complexity scales as well, and with it, costs and risks. These challenges demand a fast, efficient and consistent solution to manage images, one designed to take advantage of best practices and yet also adjust easily to the unique demands of a particular organization’s context. They also demand a more holistic, comprehensive approach to managing the overall infrastructure, in order to reduce the number and complexity of management tools as much as possible.
VMControl Enterprise Edition, utilized within IBM Systems Director, represents just such a solution. VMControl Image Management features provide a way for managers to capture system images and store them in a library. Subsequently, they can quickly and easily be provisioned to any target virtual system, and even customized with specific elements that may be required, such as drivers or data. This approach delivers a number of significant wins: much more consistent image deployment, improved security, simplified regulation compliance, higher system availability, faster time-to-value for virtual systems and the services they support and, generally, lower costs.
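The capture-then-provision workflow can be pictured with a minimal sketch. The class and field names below are illustrative, not the VMControl API; the point is that provisioning copies a stored image and layers per-target customizations on top without altering the library master:

```python
class ImageLibrary:
    """Toy model of an image library: capture once, provision many."""

    def __init__(self):
        self._images = {}

    def capture(self, name, stack):
        """Store a complete software stack (OS, middleware, apps) under a name."""
        self._images[name] = dict(stack)

    def provision(self, name, **customizations):
        """Deploy a copy of a stored image with per-target tweaks
        (e.g. hostname, drivers, data), leaving the master untouched."""
        image = dict(self._images[name])
        image.update(customizations)
        return image

lib = ImageLibrary()
lib.capture("web-tier", {"os": "RHEL 6", "middleware": "WebSphere"})
vm = lib.provision("web-tier", hostname="web07")
print(vm["hostname"])  # web07
```

Because every deployment starts from the same captured master, deployments are consistent by construction, which is where the security and compliance benefits described above come from.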
Once provisioned, virtual systems can themselves be clustered and managed as a logical group—a system pool—and dynamically assigned to changing business demands in real time. This is the heart of the system pool concept: extending virtualization across host systems to render resource utilization even more fluid and cost-efficient while reducing management complexity even further. One system pool is simpler, easier and less expensive to manage than a variety of hardware hosts running a variety of virtual servers.
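A system pool can be sketched as a group of hosts behind one capacity view, so that placement decisions consider the pool rather than individual machines. The class below is a toy illustration of that idea (all names and capacity units are made up):

```python
class SystemPool:
    """Toy model of a system pool: several hosts, one logical entity."""

    def __init__(self, hosts):
        self.hosts = dict(hosts)  # host name -> free capacity units

    def capacity(self):
        """Free capacity of the pool as a whole."""
        return sum(self.hosts.values())

    def place(self, demand):
        """Place a workload on the host with the most headroom."""
        host = max(self.hosts, key=self.hosts.get)
        if self.hosts[host] < demand:
            raise RuntimeError("pool exhausted")
        self.hosts[host] -= demand
        return host

pool = SystemPool({"host1": 8, "host2": 12})
print(pool.place(6))    # host2 (most free capacity)
print(pool.capacity())  # 14
```

The administrator asks the pool for capacity; which physical host ends up serving the workload is an implementation detail, exactly as the paragraph above describes.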
Get transparent updates, automated resilience and minimized downtime
System pools, as managed by IBM Systems Director VMControl, thus represent a great way to optimize systems. Because resources can be even more closely, quickly and easily paired with business demand, waste is minimized, and yet service level targets are invariably hit. The fact that multiple physical hosts are deployed in the pool, each itself running multiple virtual servers, is no longer directly relevant, and managers need only concern themselves with bigger-picture business goals and how well they are being fulfilled holistically.
Many specific benefits will accrue as well. For instance, consider the common business challenge of service outages; these might occur either on a planned basis (in order to carry out firmware or software updates) or an unplanned basis (due to catastrophic, unpredictable system failure). Both situations are substantially improved via VMControl-managed system pools.
Imagine a data center in which system pools have been deployed and in which one of the hardware hosts in a pool has failed. Because that pool can be monitored and managed as a logical whole, failure of one host does not translate into failure of the pool. An administrator can simply shift the services from the failed host (or any group of them) to other virtual systems within that pool or across pools—dramatically decreasing the negative business impact of the failure.
This approach, when combined with policy-driven management tools, can be automated as well. Should monitoring tools detect the failure, conditions of a logical policy will be fulfilled, and the policy will be executed. The service supported by the failed host will be automatically transferred to a healthy system, along with whatever necessary computational resources are required to optimize its workload. At no point will an IT team member be required to take action or even notice the existence of the problem. Overall downtime and costs dramatically fall, and workloads are fulfilled in a far more optimized fashion.
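The policy logic above amounts to: when monitoring marks a host as failed, recompute placements so its guests land on healthy hosts. The function below is an illustrative sketch of that step only (VMControl's real policies are configured, not coded, and the host and guest names are invented):

```python
def evacuate(placements, failed_host, healthy_hosts):
    """Return new guest->host placements after evacuating a failed host,
    spreading the displaced guests round-robin across healthy hosts."""
    new = {}
    targets = list(healthy_hosts)
    i = 0
    for guest, host in placements.items():
        if host == failed_host:
            new[guest] = targets[i % len(targets)]
            i += 1
        else:
            new[guest] = host  # unaffected guests stay put
    return new

placements = {"vm1": "hostA", "vm2": "hostA", "vm3": "hostB"}
print(evacuate(placements, "hostA", ["hostB", "hostC"]))
```

In a real deployment the monitoring event triggers this transfer automatically, so no administrator has to notice the failure at all.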
(In fact, if in this scenario, the organization wisely selects best-in-class hardware such as IBM Power Systems hosts, imminent physical failure can be anticipated and reported automatically to VMControl, which can then take appropriate, policy-driven action. In this scenario, the business impact of the hardware failure is zero.)
VMControl can effectively make planned outages a thing of the past as well, because services need not be taken offline for systems to be updated. They can simply be shifted temporarily to another logical location while updates are applied to the original systems and subsequently shifted back. Users and customers need never know, or care, that an update took place at all. Overall service availability and resilience of the data center climbs as a result, and with it user productivity (for internal services) and customer satisfaction and revenues (for external services).
Gradually develop a cloud over time
System pools can also be seen as a logical stage (or building block) in the development of a full cloud computing environment. A cloud represents an even higher level of abstraction in which multiple system pools combine to flexibly and scalably deliver all the necessary resources for optimization across as many business contexts, systems, services and applications as needed, and yet the cloud itself is managed as an integrated, holistic entity.
Not all organizations are prepared to transition to cloud computing today, though. For those seeking a more gradual migration, at a pace that matches their requirements going forward, IBM Systems Director VMControl Enterprise Edition can make that possible—delivering substantial business wins today and laying the foundation for more tomorrow.
Here's a question every CIO will find familiar: How can I get more business value from IT?
Virtualization is certainly a step in the right direction. Virtualization can make the data center more responsive to changing conditions and more scalable and flexible in the pursuit of business goals—in short, it can deliver more business agility. And, if implemented effectively, virtualization can also reduce costs.
However, there is a big difference between an idea and its implementation. And for organizations interested in taking advantage of virtualization to create more business value, there are certainly better and worse ways to go about it.
Because each organization has a unique context—characterized by a unique set of assets, processes, strengths, weaknesses and business risks—each requires a unique strategy to leverage virtualization and maximize its results.
IBM can help customers improve their business agility by leveraging the latest virtualization technologies with integrated service management. IBM can help in four distinct areas, or virtualization priorities, that CIOs can use to create a tailored virtualization strategy for their specific organizational needs.
Each priority can build on the others. And while many organizations will get excellent results pursuing these priorities sequentially, they can be pursued in any combination depending on the customer’s particular needs and priorities. Collectively, all four can help drive down costs, drive up performance, and enhance business agility.
And together, they help ensure that the benefits of a virtualized infrastructure are not merely an abstract possibility, but an operational reality—a reality that's been tailored for the best business outcome.
Priority: Consolidate resources
This priority is probably the most familiar to many of today’s organizations because it reflects the reality of most of today's data centers: thousands of lightly used systems, in some cases utilized only 20% of the time, but each continually drawing power and generating heat.
Consolidating onto fewer systems with higher utilization levels can deliver better business results. This type of IT architecture can make the data center less physically complex, saving valuable floor space, while also minimizing the possible points of failure. The direct result: by consolidating systems, IT becomes more efficient and reduces ongoing operating costs.
Virtualization is a key technology for effective consolidation: many virtual servers can share a single physical host. Furthermore, the extensive physical resources of each host (processing power, memory, or storage) can be allocated among those virtual servers flexibly, in proportion to changing business requirements. This means that application and service performance can scale to meet demand.
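The consolidation arithmetic is straightforward. The helper below is a back-of-the-envelope sketch (it assumes equally sized servers and perfectly poolable load, which real sizing exercises do not):

```python
import math

def hosts_needed(n_servers, avg_util, target_util):
    """How many equally sized hosts absorb n_servers running at
    avg_util, if each consolidated host may run at target_util?"""
    total_load = n_servers * avg_util
    return math.ceil(total_load / target_util)

# e.g. 100 servers idling at 20% utilization, consolidated onto
# hosts allowed to run at 80%:
print(hosts_needed(100, 0.20, 0.80))  # 25
```

Even this crude model shows why consolidation ratios of 4:1 and beyond are routine when starting from lightly used servers.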
Priority: Manage workloads
The management challenge can often increase through virtualization. Where one application or service was previously supported by one physical system—a relatively simple paradigm—that may no longer be the case. Now, there may be hundreds of virtual servers per system, and there may be hundreds of system images associated with those virtual servers.
Managers may need more time to track problems to root causes, or implement new strategies than they did before. The increased complexity can generate unexpected costs over time, diminishing the potential benefits promised by virtualization.
Organizations need to simplify the overall management challenge via best-in-class, centralized tools that are specifically designed for a virtual infrastructure, allowing managers to easily track status levels, perform everyday tasks, and implement any required form of change, all within an integrated systems management dashboard.
Priority: Automate processes
Automation can deliver lower operational costs and a faster, more consistent response time by minimizing the need for human intervention. While complex, sophisticated tasks will certainly require dedicated human intelligence, they should be the exception, not the rule.
As a simple example, consider the sequence of events involved in provisioning a virtual server with an appropriate stack of software: OS, applications, middleware, drivers, data, and other elements. An automatic provisioning process can be dramatically faster than a manual provisioning. It can also be completely consistent from case to case. This means that the potential of any service or application failing due to inadvertent configuration errors can be minimized.
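The consistency argument above can be made concrete: an automated provisioner applies the same layers in the same order every time and refuses incomplete stacks, which is exactly what eliminates inadvertent configuration errors. The step names and stack contents below are illustrative:

```python
PROVISION_STEPS = ["os", "middleware", "application", "drivers", "data"]

def provision(stack):
    """Apply stack layers in a fixed order; refuse incomplete stacks."""
    missing = [s for s in PROVISION_STEPS if s not in stack]
    if missing:
        raise ValueError(f"incomplete stack, missing: {missing}")
    return [(step, stack[step]) for step in PROVISION_STEPS]

stack = {"os": "RHEL 6", "middleware": "WebSphere", "application": "web",
         "drivers": "virtio", "data": "/srv/app"}
print([name for name, _ in provision(stack)])
```

A manual process can skip or reorder steps from one server to the next; the automated one cannot, which is where the case-to-case consistency comes from.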
Automation can enhance a virtualized infrastructure in many such respects. For instance, service level agreements that specify performance levels can more easily be met if they are dynamically fulfilled through automated processes. Achieving regulatory compliance with government mandates can be simpler and less expensive. Crucial processes such as disaster recovery, involving predictable actions based on known resources, can certainly be faster and more complete if automated, meaning the organization can minimize the adverse business impact of a disaster.
In short, automation can help make virtualized infrastructures more efficient, cost-efficient, responsive, and available.
Priority: Optimize service delivery
The pinnacle of a virtualized infrastructure allows organizations to optimize the delivery of services that are tightly aligned with the organization’s business priorities—usually by empowering the business user to determine and control their IT priorities as directly as possible.
An example of this type of delivery model is cloud computing. Utilizing a cloud computing infrastructure can cut the time required to translate a new idea into an actual running service. The time can be dramatically reduced from weeks to hours.
The result dramatically increases an organization’s business agility with a lower price tag—both today and tomorrow.