Blog Authors: IBM Software Defined, Virtualization+IBM, Nitin_Gaur, Jean Staten Healy, John_Foley, SamVanAlstyne, alicia_wood. Virtualization combined with Integrated Service Management helps you use your resources effectively, manage your infrastructures efficiently and gain the flexibility to meet ever-changing business demands. This blog is for the open exchange of ideas relating to virtualization across the entire infrastructure. Articles written by IBM's virtualization experts serve as conversation starters. Topics range from the latest technologies for server consolidation and tools for simplified systems management and monitoring, to automating IT systems to respond to changing business conditions, to cloud-based solutions for the "virtual" enterprise.
Virtualization+IBM
"Virtualization without good management is more dangerous than not using virtualization in the first place."—Gartner Group
That's a blunt quote, but its point is well taken. Management should be an integral element of any serious virtualization strategy from the start.
If it's not, the virtualized infrastructure certainly won't deliver its full potential; in fact, it might even be a losing investment. Far from virtualization transforming IT for the better, it will have transformed IT for the worse.
This is why Managing Workloads is one of four key priorities in IBM's new virtualization framework—a modular approach designed to help organizations create a tailored virtualization strategy that will work best for them.
Some organizations may already have in place a modern array of management solutions designed specifically to get the best results from virtualization. For the rest, however, this particular module isn't one they're going to be able to skip.
Virtualization represents such a sea change in the nature of the infrastructure that effective management solutions must be deployed, configured, tested and in use from Day One.
Virtualization+IBM
Simplifying today's highly complex IT infrastructures, to speed deployment of cloud implementations, maximize utilization of data center resources and improve productivity, is one of the most important challenges facing today's IT staffs, which are already stretched very thin. Add to this equation a diverse portfolio of compute, storage and networking platforms, each with separate, disparate management tools that don't communicate, and you have a situation that could bring even the highest-performing IT departments and data centers to a grinding halt.
Many of our clients have stressed the need, stated simply, to simplify the complicated puzzle of systems management. The desire to improve productivity, reduce IT costs and "do more with less," while being continually pushed to achieve higher levels of service, seems to be at the forefront of IT professionals' minds.
Clients as diverse as Codorníu, Spain's leading producer of sparkling wine; B C Jindal, an Indian business conglomerate; GHY, a Canadian brokerage house; and the Chinese city of Wuxi are each using IBM systems management solutions to address questions such as: "How do we effectively deploy and manage resources in the cloud?" "How do we install, configure, deploy and provision resources quickly and easily?" "How do I effectively pool resources to meet demand when and where I need them?" And, last but not least, "How do we manage heterogeneous platforms as a single unified entity?"
IBM has taken steps to make systems management of IT infrastructures simpler. IBM Systems Director is making it easier for clients to manage heterogeneous environments, using a "single pane of glass" to automate discovery, monitoring and management of IT assets (servers, storage, network devices, energy, and physical and virtual resources) and workloads. Before GHY International implemented IBM Systems Director, its IT staff spent 90% of its time on server management and basic administration. Today, GHY International's staff spends approximately 5% of its time performing the same tasks, enabling faster time-to-value for strategic business initiatives. A GHY executive commented on IBM Systems Director's impact: "The effect on productivity was astounding because it allowed us to concentrate on new services to support GHY's business strategy. We were able to add hundreds of thousands of dollars of value to the business as a result."
The ability to increase resource utilization while reducing costs is a common theme we hear from many of our clients. IT staffs continually attempt to balance capacity against scheduled (and often unscheduled) workloads. As Forrester Consulting projects in its study, Application Modernization And Migration Trends in 2009/2010, investing in additional servers to meet capacity is increasingly less viable; the report projects reductions in IT operating expenses of 36% and in capital expenditures of 32%. Companies such as B C Jindal have used IBM Systems Director to monitor, understand and proactively identify the impact of demand on the company's IT infrastructure. With this insight, the IT staff was able to maximize utilization of server resources and reduce server purchases and capital expenditures by an estimated 50 percent annually.
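The consolidation math behind decisions like these can be sketched in a few lines. The figures and the 70% headroom target below are hypothetical, not taken from the B C Jindal case; the point is simply how lightly utilized physical servers pack onto far fewer virtualized hosts.

```python
# Illustrative only: estimate how many equally sized virtualized hosts could
# absorb a set of lightly utilized physical servers, using first-fit-decreasing
# bin packing. All figures are hypothetical.

def hosts_needed(utilizations, target=70):
    """utilizations: per-server CPU utilization, as integer percent of one
    host's capacity. Returns the minimum host count found by first-fit-
    decreasing so that no consolidated host exceeds the target utilization."""
    hosts = []  # current load (percent) per consolidated host
    for u in sorted(utilizations, reverse=True):
        for i, load in enumerate(hosts):
            if load + u <= target:
                hosts[i] = load + u
                break
        else:
            hosts.append(u)  # no existing host has room; add one
    return len(hosts)

# Twenty servers idling at 5-15% utilization pack onto three hosts.
servers = [5, 10, 15] * 6 + [8, 12]
print(hosts_needed(servers))  # → 3
```

Real capacity planners weigh memory, I/O and failure domains as well, but the underlying packing problem is the same.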
Achieving additional business agility and flexibility is another issue driving IT departments to the cloud. Taking weeks or months to deploy resources for new applications, workloads or services is a luxury few can afford. Codorníu, the Chinese city of Wuxi and China Telecom's Jiangxi Subsidiary all turned to IBM Systems Director to quickly deploy and manage new services and applications. Codorníu, Spain's leading sparkling wine producer, implemented a cloud infrastructure centrally managed by IBM Systems Director and cut its administration costs in half while reducing its data center by 70%!
The Chinese city of Wuxi and China Telecom's Jiangxi Subsidiary used IBM Systems Director to quickly deploy shared, revenue generating services, based on a "pay for what you use" model. For China Telecom's Jiangxi Subsidiary using IBM Systems Director enabled a more fluid and flexible business model by reducing time to deployment from three to four months to two to three days. Wuxi's Cloud Center used IBM Systems Director's "single pane of glass" to centrally manage compute resources for local businesses, helping them avoid capital expenditures for hardware.
IBM Systems Director can help IT reduce the amount of time it takes to install, deploy, discover, monitor and manage resources in today's highly complex infrastructures. IBM's systems management approach, leveraging a "single pane of glass," provides enhanced visibility and control of a heterogeneous environment, enabling IT administrators to maximize utilization of data center resources, which can result in decreased data center costs and increased productivity.
Want to learn more?
Click on the link to download the white paper, “IBM Systems Director: Optimized and simplified management of IT infrastructures.” The paper describes how IBM Systems Director can help you speed deployment of cloud implementations, maximize utilization of data center resources and improve IT staff productivity.
Virtualization+IBM
I had the pleasure of being the IBM speaker on an eWeek web seminar yesterday. The topic was cloud computing, but virtualization played an important role in the discourse. We had a lively discussion about the difference between cloud and virtualization. Some believe they are one and the same, but I disagree. To me, cloud computing is a model, a concept that outlines a delivery methodology for various types of services, whereas virtualization represents technology: real products you can buy today. One thing we did agree on: no matter how you classify it, virtualization is the single most important enabling technology for cloud.
Virtualization is more than Consolidation.
Virtualization in all its forms enables consolidation. It allows you to take multiple racks of 1U servers and squeeze them into a much smaller form factor that’s easier to power, cool and manage. It provides a way to take hundreds of disk drives and manage them as a storage pool driving higher utilization and reducing administrative costs. Virtualization allows you to break the ties between workloads and the physical devices on which they run giving the IT staff the flexibility to make sure critical workloads always have the resources they need.
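That last point, breaking the ties between workloads and physical devices, can be made concrete with a toy sketch. This is not a real hypervisor API; hosts are just records in a pool, and "migrating" a workload means re-placing it on whichever host currently has the most free capacity.

```python
# Toy illustration of resource pooling: a critical workload can always be
# re-placed on the host with the most free capacity. Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: int                                   # abstract capacity units
    workloads: dict = field(default_factory=dict)   # workload name -> demand

    @property
    def free(self):
        return self.capacity - sum(self.workloads.values())

def migrate(workload, demand, pool):
    """Place (or re-place) a workload on the host with the most free capacity."""
    for host in pool:                  # detach from its current host, if any
        host.workloads.pop(workload, None)
    target = max(pool, key=lambda h: h.free)
    if target.free < demand:
        raise RuntimeError("pool exhausted")
    target.workloads[workload] = demand
    return target.name

pool = [Host("hostA", 100), Host("hostB", 100)]
migrate("web01", 40, pool)         # lands on hostA (both hosts empty)
migrate("db01", 80, pool)          # hostA has only 60 free, so hostB
print(migrate("web01", 60, pool))  # → hostA (re-placed where 60 units fit)
```

A real hypervisor does this with live migration, moving the running virtual machine itself, but the scheduling decision it automates looks much like this.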
There are other forms of virtualization to consider as well. Application virtualization provides another way to separate workloads from their physical environment. We can illustrate this by first looking back to the days of “scale out” architecture in data centers. If you needed an additional web server, you’d buy another 1U server, configure it with the operating system and web server software you needed, test it and add it to your rack. Today you’d use a product like WebSphere to create an additional virtual web server, immediately increasing your capacity. Once that capacity is no longer needed, WebSphere can remove it from the operating environment. When you add systems virtualization to application virtualization, the possibilities are limitless.
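WebSphere's own dynamic clustering isn't shown here; the following is a generic sketch of the elasticity idea the paragraph describes: grow the pool of virtual web servers when demand rises, and remove the capacity when it is no longer needed. The class, the 500 requests-per-second figure and the sizing rule are all illustrative assumptions.

```python
# Generic autoscaling sketch (hypothetical names and capacity figures):
# resize a tier of virtual web servers to fit current demand.

import math

class WebTier:
    PER_INSTANCE_RPS = 500          # assumed capacity of one virtual server

    def __init__(self):
        self.instances = 1          # always keep at least one server running

    def scale_to(self, requests_per_second):
        """Resize the tier so current demand fits, never dropping below one."""
        self.instances = max(1, math.ceil(requests_per_second / self.PER_INSTANCE_RPS))
        return self.instances

tier = WebTier()
print(tier.scale_to(2200))  # demand spike → 5 virtual servers
print(tier.scale_to(300))   # demand gone → capacity removed, back to 1
```

Contrast this with the scale-out era, where each of those five instances would have meant ordering, racking and configuring a physical 1U server.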
Virtual workloads, virtual machines and virtual appliances are all part of Virtualization. Consolidation and the money it saves is important but it’s only the beginning.
Sam Van Alstyne, Cross-IBM Virtualization Marketing and Strategy
Virtualization+IBM
For many organizations today, that lesson would be well applied in the data center. Data center workloads are often fulfilled via standard system types—blades, for instance—on this faulty premise: workloads are fundamentally similar, and they can therefore be fulfilled using fundamentally similar technology.
Adding more services or applications? Scaling up performance to meet unpredictable demand levels? The old-school response to both questions has often been simply this: “Add more blades, and let the workloads take care of themselves.”
The business reality today, however, demands quite a different response. We live in a time of rapid change; IT must change in parallel. Business workloads in almost all enterprise-class organizations have become more diverse, more complex, and more dynamic. Fulfilling them optimally requires a new approach—one that acknowledges the growing need for an improved business outcome from the IT infrastructure, and that moves beyond the standard response of rolling out more hardware.
Rather than simply deploy standard platforms on the assumption that they can be mapped to any given workload, IBM believes that organizations should consider shifting the focus to the workloads themselves. That is, organizations should analyze what a given workload requires now, and is likely to require in the future, to meet business targets. They should then ask themselves how best those workloads can be fulfilled, via infrastructural deployment and integration, to improve service levels, reduce costs and mitigate business risks.
Failing to do so is very likely to result in a suboptimal return on investment from the IT infrastructure. In today's difficult business climate, that can easily translate into the difference between success and failure.
Optimizing different workloads requires different solutions
To illustrate this point, consider the following four business workload classes, all four of which are in common use by businesses today. Each class of workload is fundamentally distinct from the others in terms of its resource requirements and the systems best used to fulfill it.
How should organizations ideally fulfill the requirements of such fundamentally different workload types? As we have seen, no two of these workloads are characterized by identical challenges; no two, similarly, demand identical resources. It stands to reason that no two can be best fulfilled using identical platforms. And the organization that ignores the varying nature and details of these workloads, instead simply deploying more blades in a generic fashion, is not likely to get the best business outcome.
Work with IBM to develop a workload-optimized, dynamic infrastructure
IBM offers a compelling alternative: the concept of the dynamic infrastructure. This is best understood as a flexible, scalable infrastructure capable of assigning infrastructural resources dynamically, in accordance with changing business requirements, via the convergence of IT and business management. It benefits from IBM's deep and proven expertise in assisting organizations of all kinds as they strive to optimize their workloads, and it can also be tailored to match any organization's unique context and requirements.
Naturally, no two organizations have the same goals, resources, challenges or workloads. No two organizations, similarly, will implement a dynamic infrastructure in the same manner. Fortunately, IBM offers a complete range of hardware, software and services from which a custom solution can be developed—a tailored, workload-optimized dynamic infrastructure capable of generating truly superior business value.
Among other elements available to clients for this purpose:
Furthermore, workload optimization is a core element of every aspect and phase of the dynamic infrastructure migration. The IBM process of developing a dynamic infrastructure, in fact, begins not with technology per se, but with organizational workloads. Their attributes—and the goals and requirements associated with them—drive system requirements, which inform and determine optimal system design, which is then optimized still further to ensure workload fulfillment.
In this way, IBM keeps the focus where it belongs: on business goals and the many ways technology can help fulfill them, both efficiently and cost-efficiently, both today and tomorrow.
Virtualization+IBM
Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
To read more, check out Part I of an important new blog series from our Tivoli Storage team that explains the value of virtualizing storage resources. Also, stay tuned for Parts II and III, which take this “storage hypervisor” concept further, and into the cloud!
IBM Software Defined
With the help of a robust ecosystem, open source technologies such as KVM become a force to be reckoned with.
What is it that causes some new technologies to gain wide acceptance while others simply fall by the wayside? It’s a given that, in order to be meaningful, new technologies must be enterprise-grade, they must be cost-effective, and they must address a real need. And, at least in the open source world, the endorsement of a robust community is the other critical factor. KVM (Kernel-based Virtual Machine) is a case in point.
KVM has made great progress since its inclusion in the Linux kernel in 2007, observes analyst Gary Chen in a recent IDC white paper. In addition, he notes, the strength of KVM as well as its ecosystem makes KVM an increasingly attractive virtualization choice for customers that rely on Linux and beyond.
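One concrete technical footing for KVM's progress: it relies on the hardware virtualization extensions in modern processors (Intel VT-x or AMD-V). A common way tools check for them is the flags line in `/proc/cpuinfo`; the helper below works on the file's text so it can be exercised anywhere, and is a simplified sketch rather than any particular tool's implementation.

```python
# Sketch of the check tools perform before enabling KVM: does the CPU
# advertise a hardware virtualization extension in its /proc/cpuinfo flags?

def hw_virt_support(cpuinfo_text):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None if the CPU flags
    in the given /proc/cpuinfo text advertise no virtualization extension."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

sample = "processor : 0\nflags : fpu vme de pse tsc msr vmx lm\n"
print(hw_virt_support(sample))  # → vmx
```

On a real Linux system you would pass `open("/proc/cpuinfo").read()`; a None result means KVM's hardware-accelerated mode is unavailable.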
The point is: You may have a product, but if you don’t also have an ecosystem, you will hit the “so what” factor. In essence, there is not a complete solution – at least, not until there is a community around it. And the more individuals and companies that contribute code to an open source initiative, include the technology in their products, and provide services related to it, the more polished the solution stack becomes.
Take a look at the ecosystem around KVM and you will find a range of robust communities that aim to address a specific area or requirement. IBM, which has backed open standards and open source technologies for a long time, is a founding member of each. And of course KVM itself is developed by an open source community.
The OpenStack Foundation, for example, is a recent entrant into the open source ecosystem around KVM. Launched as an independent foundation in 2012, the goal of the OpenStack Foundation is to foster cloud interoperability. The OpenStack Foundation serves developers, users, and the entire ecosystem by providing a set of shared resources to grow the footprint of public and private OpenStack clouds. To date, the foundation has more than 9,800 individual members from 87 countries – and has also secured more than $10 million in funding.
The Open Virtualization Alliance, launched in May 2011, is a consortium committed to fostering the adoption of open virtualization with KVM. To date, the OVA counts more than 250 vendors from all over the world among its membership. The consortium advances awareness and understanding of KVM, drives adoption of KVM-based solutions, and helps promote interoperability and best practices to accelerate the expansion of the ecosystem of third-party solutions around KVM – giving enterprises improved choice, performance and price through open virtualization with KVM.
Modeled after the Apache Foundation, Eclipse, LVM and many other open source communities, the oVirt Project was launched in December 2011. oVirt develops and distributes an open source virtualization management platform that combines the KVM hypervisor with capabilities for hosts and guests. In this way it supports organizations looking for open alternatives to traditional virtualization technology, for both the hypervisor and virtualization management.
Some individuals and organizations – like IBM – are involved with all three of these groups. Others select the one that meets their own unique interests or needs. But while there is an open invitation to participate, make no mistake – open source communities are merit-based systems. This is a good thing – the communities provide a stimulating combination of competition and cooperation – creating what we call “a friction of ideas.” And this is what ultimately results in high-quality, well-vetted products.
Don’t miss out on the opportunity. Get involved!
Adam Jollans - Program Director, Worldwide Linux and Open Virtualization Strategy, IBM
Virtualization+IBM
Virtualization raises many important considerations for IT decision-makers. Top of mind for enterprises are the systems that need to be in place to support virtualization: servers, networks and storage. But it’s important that IT also understands the virtualization implications in the middleware and application stack. For many organizations, this is an afterthought until systems administrators actually begin to virtualize their application workloads. Understanding application virtualization is critical because one particularly large IT company has been causing quite a bit of confusion and pain for enterprises trying to virtualize their workloads. And unfortunately, many IT organizations are caught completely by surprise by the company’s stance.
I’m referring, of course, to Oracle’s draconian policies on virtualization. Industry analysts and pundits have been warning enterprises for months about Oracle’s inflexible licensing terms and conditions, which require extra licensing costs to support key virtualization features, charge companies for processors they do not use, and do not provide support on any leading virtualization platform such as VMware, KVM or Hyper-V. This last issue, hypervisor support, is especially troublesome because it has created a perception among some enterprises that all IT providers have similar policies. Take this quote from a recent SearchServerVirtualization.com article, for example:
While that statement describes Oracle’s approach to a “T”, the truth is that not all companies operate the same way with virtualization. In IBM’s case, our middleware and software can support deployments running in all leading virtualization platforms including VMware, KVM, Hyper-V, Xen, PowerVM and z/VM. We let our clients choose which platform is right for them, and we even support the entire “mixed” stack if that is what the client prefers. We make it easy for enterprises to complete their virtualization journey, leveraging existing IT investments and applications, with no surprises along the way.
Now that you know, don’t fall for Oracle’s virtualization trap.
IBM Software Defined
Last week, IBM was the premier sponsor of the Red Hat Summit in Boston for the ninth year in a row. This conference is a highlight for me each year because it gives both companies the opportunity to showcase the joint solutions we deliver to our clients, hear what mutual customers have to say first-hand, and offer a peek at what will be coming in the year ahead. There is always a lot of energy at the Red Hat Summit, spurred by thought-provoking presentations and the unveiling of major innovations. This year was no exception.
Kicking off IBM’s participation in the Red Hat Summit, Arvind Krishna, GM Development and Manufacturing, IBM STG, delivered a keynote in which he announced new IBM initiatives to further support and speed up the adoption of the Linux operating system across the enterprise. Arvind told attendees that IBM is opening two new Power Systems Linux Centers in Austin, Texas, and New York, in addition to the Power Systems Linux Center launched in Beijing in May. Arvind also spoke about IBM’s plans to extend support for Kernel-based Virtual Machine (KVM) technology to the Power Systems portfolio of server products, giving IBM Power customers an open choice. The new centers will make it easier for developers to build new applications for big data, cloud, mobile and social business computing using Linux, and in the future KVM, with the latest IBM POWER7+ processor technology. Signifying the importance of these announcements, the news was covered widely in the media, including Forbes' DividendChannel, ZDNet, eWeek, Linux and Life, Computer Business Review, and The Register.
At the Summit, Red Hat announced the global availability of Red Hat Enterprise Virtualization 3.2, which builds on the industry-leading performance of the KVM hypervisor to offer an enterprise-class data center virtualization and management solution, with fully supported Storage Live Migration and a new third-party plug-in framework. Red Hat also announced that IBM is joining the Red Hat OpenStack Cloud Infrastructure Partner Network, the availability of the new Red Hat OpenStack Certification, and the launch of the Red Hat Certified Solution Marketplace. The Red Hat Certified Solution Marketplace already includes more than 500 products that have been certified as OpenStack compute (Nova) compatible, from technology leaders including IBM. IBM’s collaboration with Red Hat and the OpenStack ecosystem is in line with our commitment to give clients the flexibility, cost-effectiveness, and security that is necessary for cloud computing, both now and in the future.
It was clear at the Summit that cloud is on our customers’ roadmaps. Both IBM and Red Hat understand the importance of the cloud and the critical role that Linux and KVM play in the cloud. Whether it is private, public, or hybrid, we know customers have to virtualize to get there – and both IBM and Red Hat are committed to KVM as the virtualization hypervisor.
There were many other high points at this year’s conference as well. In our booth, IBM profiled technology from IBM PureSystems, IBM System x, IBM BladeCenter, IBM Power Systems, and IBM System z, and demonstrated the latest IBM solutions for cloud computing, open virtualization with KVM, and big data. I also had the opportunity to moderate a panel discussion in which representatives from IBM, Red Hat, and the University of Connecticut participated. The discussion focused on common Red Hat Enterprise Virtualization, KVM, and OpenStack use cases and the business benefits that are being realized. I was pleased to see a packed room with the audience asking many more technical questions about KVM than in prior years.
As I left the conference this year, I was struck by the thought that something was very different. Whether customers are discussing the use of KVM in the cloud, or adding it as a second hypervisor for “hyperdiversity,” the debate about whether KVM is technically ready is now over. It has achieved impressive SPECvirt and TPC-C benchmarks, security certifications, and according to IDC, is showing impressive growth in unit shipments. We are no longer explaining what KVM is. Instead, this year, we were able to show a robust portfolio of clients that have realized success with KVM. The conversation around KVM has changed.
Jean Staten Healy - Director, Worldwide Linux and Open Virtualization, IBM
Virtualization+IBM
Why IBM SONAS?
As dependencies on today’s enterprise business computing increase, ensuring that applications are highly reliable becomes more critical. The constant outpouring of data from day-to-day enterprise business applications creates new storage challenges for today’s enterprise IT environments.
VMware vSphere makes it simpler and less expensive to provide higher levels of availability for mission-critical enterprise business applications.
IBM Scale Out Network Attached Storage (SONAS) provides extreme scale-out capability, with a globally clustered network-attached storage (NAS) file system built upon IBM General Parallel File System (GPFS).
IBM SONAS is a best-in-class storage solution that delivers the performance, clustered scalability, high availability (HA) and functionality that enterprise virtual IT infrastructures demand.
VMware vSphere delivers higher levels of availability with VMware HA and VMware Fault Tolerance (FT) features.
An integrated VMware vSphere and IBM SONAS virtual IT infrastructure meets enterprise demands for high availability and for massive scalability in both performance and storage capacity.
Economic challenges drive enterprise businesses to provide high levels of application availability, performance and extreme scalability while simultaneously achieving greater cost savings and reduced complexity. As a result, data center infrastructure is increasingly virtualized, because virtualization provides compelling economic, strategic, operational and technical benefits. Planning a robust, highly available and extremely scalable infrastructure solution for enterprise virtual data center environments hosting mission-critical applications is of utmost importance.
Some of the key aspects of an effective high-availability virtualized infrastructure include:
Operational efficiency and management simplicity
VMware vSphere provides uniform, cost-effective failover protection against hardware and software failures within an enterprise virtualized IT environment with VMware high availability and fault tolerance features.
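vSphere HA's actual logic is far richer, but its core failover behavior can be sketched: when a host in the cluster fails, the VMs it was running are restarted on surviving hosts that still have spare capacity. Everything below, the data structure, the slot model and the host names, is an illustrative assumption, not the VMware implementation.

```python
# Toy sketch of HA-style failover (hypothetical data model): restart a failed
# host's VMs on the surviving hosts with the most spare "slots".

def fail_over(cluster, failed_host):
    """cluster: {host: {"slots": int, "vms": [vm, ...]}} where slots counts
    spare VM slots. Removes failed_host; returns (restarted, lost) lists."""
    orphans = cluster.pop(failed_host)["vms"]
    restarted, lost = [], []
    for vm in orphans:
        target = max(cluster, key=lambda h: cluster[h]["slots"], default=None)
        if target is None or cluster[target]["slots"] == 0:
            lost.append(vm)        # admission-control case: no capacity left
            continue
        cluster[target]["vms"].append(vm)
        cluster[target]["slots"] -= 1
        restarted.append((vm, target))
    return restarted, lost

cluster = {
    "esx1": {"slots": 0, "vms": ["db01", "web01"]},
    "esx2": {"slots": 1, "vms": ["app01"]},
    "esx3": {"slots": 2, "vms": []},
}
print(fail_over(cluster, "esx1"))
```

The "lost" branch is why HA deployments reserve failover capacity in advance: without spare slots somewhere in the cluster, a failed host's VMs have nowhere to restart.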
Traditional NAS filers do not scale to high capacities. When one filer was fully utilized, a second, third and more filers were installed. As a result, enterprise IT administrators very often found themselves managing silos of filers: capacity on individual filers could not be shared, and some filers were heavily accessed while others sat mostly idle.
The SONAS system is available in configurations as small as 27 terabytes (TB) in the base rack, and scales up to a maximum of 30 interface nodes and 60 storage nodes within 30 storage pods. The storage pods fit into 15 storage expansion racks. The 60 storage nodes can contain a total of 7,200 hard disk drives when fully configured using 96-port InfiniBand® switches in the base rack. The SONAS advanced architecture virtualizes and consolidates multiple filers into a single, enterprise-wide file system, which can translate into reduced total cost of ownership, reduced capital expenditure, and enhanced operational efficiency.
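As a sanity check on these scale figures, the raw capacity of a fully configured system can be computed directly. This short Python sketch assumes 2 TB drives, the drive size used in the sizing discussion in this post:

```python
# Raw capacity of a fully configured SONAS system (illustrative arithmetic).
DRIVES = 7200        # hard disk drives in a fully configured system
DRIVE_TB = 2         # assumed capacity per drive, in terabytes

raw_tb = DRIVES * DRIVE_TB   # total raw capacity in TB
raw_pb = raw_tb / 1000       # same figure expressed in petabytes

print(f"{raw_tb} TB raw = {raw_pb} PB")  # -> 14400 TB raw = 14.4 PB
```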
Assuming 2 TB disk drives, a fully configured SONAS system can scale up to 14.4 petabytes (PB) of raw storage and billions of files in a single large file system. A fully configured 14.4 PB system can host as few as eight file systems or as many as 256. IBM SONAS provides:
Integrating VMware HA and VMware FT technologies with the petabyte-scale IBM SONAS offers a best-in-class value proposition. The combination of VMware vSphere and IBM SONAS provides a simple and robust high availability solution for planned and unplanned downtime in virtual enterprise data center environments hosting mission-critical applications.
For more information on an IBM SONAS powered virtual IT infrastructure, please read the following technical reports:
Udayasuryan A Kodoly
Virtualization enables workload optimization by optimizing systems and system management
Optimizing workloads—to meet or exceed target service levels while using the fewest resources possible—is a major goal for the enterprise today.
However, before workloads can be optimized, the systems that drive them must be optimized first. And system optimization is exceptionally hard to achieve in a conventional infrastructure, in which services are tied one-to-one to commodity hardware such as low-end x86 systems. Commonly, such an infrastructure will be idle more than 90 percent of the time—generating costs but not business value. And should more resources be required for an unexpected spike in workloads, those resources may not be available.
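To make the utilization argument concrete, here is a back-of-the-envelope estimate of how many lightly loaded, dedicated servers a single virtualized host could absorb. The 10 percent and 70 percent figures are illustrative assumptions, not measurements:

```python
# Illustrative consolidation estimate; the utilization figures are assumptions.
avg_util_pct = 10     # assumed average utilization of a dedicated x86 server (%)
target_util_pct = 70  # assumed safe utilization target for a virtualized host (%)

# Each consolidated server contributes roughly avg_util_pct of one host's
# capacity, so a host running at the target can absorb about this many:
servers_per_host = target_util_pct // avg_util_pct

print(servers_per_host)  # -> 7
```

Even under these rough assumptions, a seven-to-one consolidation ratio shows why mostly-idle one-to-one deployments leave so much value on the table.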
Virtualization represents a much better approach, through which workload resources can be shifted in real time wherever they are required, service levels can be enhanced as a result, and both idle time and overall costs can be minimized—essentially, a vision of workload optimization. But realizing this vision via a virtualized infrastructure will also typically mean moving to a new management paradigm.
A modern solution such as IBM Systems Director will be needed in order to consolidate and simplify overall management by tracking status/health levels of different servers and hosts, and by fulfilling everyday tasks such as software provisioning and problem isolation. Systems Director elegantly unifies management across multiple operating systems, IBM server groups and certain non-IBM servers—taking the focus away from the details of the technology per se and turning it toward the optimized utilization of the IT infrastructure in the pursuit of business goals.
Create and manage system pools with IBM Systems Director VMControl Enterprise Edition
Now, IBM has taken the next evolutionary step in system optimization through virtualization management.
IBM Systems Director VMControl Enterprise Edition, a plug-in extension that works within the general Systems Director environment, allows the enterprise to create virtual system pools: groups of virtualized resources (servers, storage and network). Because they can be managed as a single entity, system pools thus function as building blocks that administrators can use to optimize systems more easily, more quickly and more consistently—mitigating business risks by enhancing availability, reducing costs by better linking resource allocation to business demands and driving service levels to new heights.
To see how system pools work, begin with the fact that successful virtualization will almost always require careful management of system images. Images contain the complete software stack of operating system, middleware, applications, data and other elements required for a virtual server; IT organizations will therefore usually have many images created for many business purposes. When the number of images proliferates, management complexity scales as well, and with it, costs and risks. These challenges demand a fast, efficient and consistent solution to manage images, one designed to take advantage of best practices and yet also adjust easily to the unique demands of a particular organization’s context. They also demand a more holistic, comprehensive approach to managing the overall infrastructure, in order to reduce the number and complexity of management tools as much as possible.
VMControl Enterprise Edition, utilized within IBM Systems Director, represents just such a solution. VMControl Image Management features provide a way for managers to capture system images and store them in a library. Subsequently, they can quickly and easily be provisioned to any target virtual system, and even customized with specific elements that may be required, such as drivers or data. This approach delivers a number of significant wins: much more consistent image deployment, improved security, simplified regulation compliance, higher system availability, faster time-to-value for virtual systems and the services they support and, generally, lower costs.
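The capture/store/deploy pattern described above can be modeled with a minimal sketch. To be clear, the class and method names below are hypothetical illustrations of the workflow, not the VMControl API:

```python
# Hypothetical sketch of a capture/store/deploy image-library workflow.
# Names are illustrative only -- they model the pattern, not IBM's API.
from dataclasses import dataclass, field


@dataclass
class Image:
    name: str
    stack: list                    # OS, middleware, application layers
    customizations: dict = field(default_factory=dict)


class ImageLibrary:
    def __init__(self):
        self._images = {}

    def capture(self, name, stack):
        """Capture a system's complete software stack as a reusable image."""
        self._images[name] = Image(name, list(stack))

    def deploy(self, name, target, **customizations):
        """Provision a stored image to a target virtual system, optionally
        customizing it with specific elements (drivers, data, and so on)."""
        base = self._images[name]
        copy = Image(base.name, list(base.stack), dict(customizations))
        return f"{copy.name} deployed to {target} with {copy.customizations}"


lib = ImageLibrary()
lib.capture("web-tier", ["RHEL", "WebSphere", "app-v2"])
print(lib.deploy("web-tier", target="vm-042", driver="virtio"))
```

Deploying every virtual system from one library of captured images, rather than building each by hand, is what yields the consistency, security and compliance benefits the post describes.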
Once provisioned, virtual systems can themselves be clustered and managed as a logical group—a system pool—and dynamically assigned to changing business demands in real time. This is the heart of the system pool concept: extending virtualization across host systems to render resource utilization even more fluid and cost-efficient while reducing management complexity even further. One system pool is simpler, easier and less expensive to manage than a variety of hardware hosts running a variety of virtual servers.
Get transparent updates, automated resilience and minimized downtime
System pools, as managed by IBM Systems Director VMControl, thus represent a great way to optimize systems. Because resources can be even more closely, quickly and easily paired with business demand, waste is minimized, and yet service level targets are invariably hit. The fact that multiple physical hosts are deployed in the pool, each itself running multiple virtual servers, is no longer directly relevant, and managers need only concern themselves with bigger-picture business goals and how well they are being fulfilled holistically.
Many specific benefits will accrue as well. For instance, consider the common business challenge of service outages; these might occur either on a planned basis (in order to carry out firmware or software updates) or an unplanned basis (due to catastrophic, unpredictable system failure). Both situations are substantially improved via VMControl-managed system pools.
Imagine a data center in which system pools have been deployed and in which one of the hardware hosts in a pool has failed. Because that pool can be monitored and managed as a logical whole, failure of one host does not translate into failure of the pool. An administrator can simply shift the services from the failed host (or any group of them) to other virtual systems within that pool or across pools—dramatically decreasing the negative business impact of the failure.
This approach, when combined with policy-driven management tools, can be automated as well. Should monitoring tools detect the failure, conditions of a logical policy will be fulfilled, and the policy will be executed. The service supported by the failed host will be automatically transferred to a healthy system, along with whatever necessary computational resources are required to optimize its workload. At no point will an IT team member be required to take action or even notice the existence of the problem. Overall downtime and costs dramatically fall, and workloads are fulfilled in a far more optimized fashion.
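The policy logic in this scenario can be sketched in a few lines. This is a hedged model of the pattern with hypothetical names, not a specific IBM product interface:

```python
# Minimal sketch of policy-driven failover across a system pool.
# Hypothetical data model; illustrates the pattern, not a product API.

def failover(pool):
    """Move every service off failed hosts onto a healthy host in the pool."""
    healthy = [h for h in pool if h["status"] == "ok"]
    for host in pool:
        if host["status"] == "failed" and host["services"]:
            # Policy: pick the least-loaded healthy host as the target.
            target = min(healthy, key=lambda h: len(h["services"]))
            target["services"].extend(host["services"])
            host["services"] = []


pool = [
    {"name": "host-a", "status": "failed", "services": ["erp", "web"]},
    {"name": "host-b", "status": "ok", "services": ["mail"]},
    {"name": "host-c", "status": "ok", "services": []},
]
failover(pool)
print([(h["name"], h["services"]) for h in pool])
```

Because the pool is treated as one logical entity, the policy only has to decide *where* within the pool each displaced service lands; no administrator intervention is required.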
(In fact, if in this scenario, the organization wisely selects best-in-class hardware such as IBM Power Systems hosts, imminent physical failure can be anticipated and reported automatically to VMControl, which can then take appropriate, policy-driven action. In this scenario, the business impact of the hardware failure is zero.)
VMControl can effectively make planned outages a thing of the past as well, because services need not be taken offline for systems to be updated. They can simply be shifted temporarily to another logical location while updates are applied to the original systems and subsequently shifted back. Users and customers need never know, or care, that an update took place at all. Overall service availability and resilience of the data center climbs as a result, and with it user productivity (for internal services) and customer satisfaction and revenues (for external services).
Gradually develop a cloud over time
System pools can also be seen as a logical stage (or building block) in the development of a full cloud computing environment. A cloud represents an even higher level of abstraction in which multiple system pools combine to flexibly and scalably deliver all the necessary resources for optimization across as many business contexts, systems, services and applications as needed, and yet the cloud itself is managed as an integrated, holistic entity.
Not all organizations are prepared to transition to cloud computing today, though. For those seeking a more gradual migration, at a pace that matches their requirements going forward, IBM Systems Director VMControl Enterprise Edition can make that possible—delivering substantial business wins today and laying the foundation for more tomorrow.
Linux Journal just released their 2011 Readers’ Choice Awards. I am very pleased to share in this blog that IBM is the winner in the “Best Linux Server Vendor” category, for the second year in a row.
Every year, Linux Journal invites its readership to cast their vote for their favorite Linux vendor. This year, over 20 server vendors were nominated for the “Best Linux Server Vendor” award including Dell, HP and Sun Microsystems. The awards are announced in the December issue of the Linux Journal.
IBM's win in this category is a testament to IBM's long-standing commitment to Linux. Eleven years ago, IBM announced a $1 billion investment in Linux, taking the technology from a successful science project to a major force in business IT. Not only was this a turning point for Linux and the Linux community, it was also a pivotal moment in IBM's history. This investment marked one of the first times IBM decided to embrace open source software and make it core to our business strategy.
Today that tradition continues. IBM is consistently among the top commercial contributors of Linux code, as measured regularly by The Linux Foundation's "Who Writes Linux" series. Linux also continues to be a fundamental component of IBM's business, embedded deeply in hardware, software, services and internal deployment.
Recently, IBM again showcased its longstanding commitment to open source and virtualization by backing KVM, the Linux-based hypervisor. Kernel-based Virtual Machine (KVM) is the next step in the evolution of x86 virtualization technology. KVM is an open source hypervisor that provides enterprise-class performance, scalability and security to run Windows and Linux workloads. KVM provides businesses with a cost-effective alternative to other x86 hypervisors, and enables a lower-cost, more scalable, and open cloud.
Linux evolved into a leading enterprise OS thanks to its community of developers, and KVM will evolve for the same reasons. Because KVM is based on Linux, it takes advantage of the scheduler, memory management, power management, hardware device drivers, platform support, and other features continuously being produced by the thousands of developers in the Linux community. This gives KVM a significant "feature velocity" that other virtualization solutions cannot match.
Celebrate with us as IBM wins this prestigious “Best Linux Server Vendor” Award.
Jean Staten Healy
Director, WW cross-IBM Linux and Open Virtualization
Ask any CIO over the past two years and they will likely tell you that virtualization has been a top IT project in their organization throughout that time span. The promise (and realization) of greater IT efficiency via server consolidation and capital expense reductions has propelled server virtualization and hypervisors into the enterprise mainstream. In fact, some clients are now hitting a wall with server virtualization because they have consolidated all of their “low hanging fruit” workloads, such as IT infrastructure, web servers, and email and collaboration. IDC estimates that nearly two-thirds of all IT workloads will be virtualized by the end of the year.¹ Where do CIOs go from here?
The answer, for the vast majority of clients, is more virtualization. As a core IT technology, virtualization has far-reaching, and lesser-known, implications beyond servers and compute. One example of this reach is how the principles of virtualization (that is, the logical representation of physical resources) can be directly applied to other datacenter elements such as storage and networking for resource consolidation and pooling. Another example is how server virtualization impacts the rest of the datacenter in terms of provisioning resources, optimizing performance and managing service levels. This impact becomes very clear to clients who are beginning their cloud computing journey and haven’t planned ahead by modernizing their datacenter. The bottom line is that IT needs a comprehensive approach to virtualization, one that includes solutions spanning servers, storage, networking, management and application infrastructure. At IBM, we call this approach Advanced Virtualization.
CIOs and IT leaders need to expand their virtualization expectations beyond increasing consolidation and reducing capital costs. Advanced Virtualization yields additional benefits by ensuring cost-effective business availability and transforming application support. These expanded benefits are essential to clients working to virtualize their complicated and mission-critical workloads, such as ERP systems, OLTP applications and business analytics solutions. Leading organizations are virtualizing their entire datacenter in order to maximize business agility, lower operating costs, and divert resources to new projects for innovation.
We think virtualization is more than you’ve probably been led to believe. Please join us on June 20th at 1:30pm ET as Scott Firth, Director of IBM Virtualization Marketing, shows how enterprises are investing in virtualization beyond hypervisors and servers. With over 45 years of virtualization expertise, IBM provides an end-to-end approach to virtualizing your enterprise. See how our clients are profiting from our industry leadership.
IBM Virtualization Strategy Leader, IBM Systems Software
¹ IDC Market Analysis Perspective: Worldwide Datacenter Trends and Strategies, 2011
How does your organization define the value of virtualization solutions? Are you calculating your return on investment (ROI) strictly based upon cost savings? If so, you might be missing out on the true benefits of virtualization.
Let’s rewind for a second. Historically, server virtualization was the first step organizations took in an attempt to save costs. Identifying and eliminating underperforming servers in the IT infrastructure helped recapture floor space and reduce costs associated with software licensing, cooling and power. Server consolidation even made it easier for IT administrators to increase their productivity: a reduction in servers and associated management enables a greater focus on projects strategic to the enterprise.
Server virtualization opens the door to efficiency, but it is only the beginning of the virtualization journey. If you are satisfied with achieving these very basic results, you might be missing the whole point of virtualization (and the benefit, too). Consider the potential benefits of virtualizing the entire enterprise.
We’ll continue this discussion next week by highlighting other ways to drive efficiency through virtualization and give you a glimpse into the software-defined future of the data center.
In the meantime, please share your thoughts with us on how your organization defines the value of virtualization, and on the challenges you encounter in calculating virtualization ROI.
Lastly, at IBM Pulse 2013, Jacqueline Woods, Vice President of Marketing, along with a panel of industry analysts and clients, will discuss a more effective way to calculate the ROI, or value, of a comprehensive IT data center. We would welcome hearing your thoughts before and during the session. You can register for Jacqueline’s session (CSM-2390): “What Happens When You Add IBM Systems and Expertise to the Software Defined Data Center?”
When IBM announced the new POWER8 processor and next-generation scale-out Power Systems earlier in 2014, PowerKVM was also introduced. The introduction was notable because it marked the first time IBM provided open hypervisor technology – beyond the proprietary IBM PowerVM – on Power. By supporting KVM, IBM is removing a potential adoption hurdle for users familiar with Linux and KVM on x86 who want to move to the enhanced hardware platform.
The open source hypervisor KVM (Kernel-based Virtual Machine) offers many advantages in terms of cost, security and simplicity, and as a result is gaining ground in the enterprise, particularly among organizations that already have Linux servers deployed in their data centers, or are interested in consolidating workloads or building a flexible infrastructure. This trend is also part of a larger movement toward deploying more than one type of hypervisor in a data center, which has been termed “hyperversity” by the Gabriel Consulting Group.
With the rollout of KVM on Power, IBM is committed to making it as easy as possible both for Power users who have not used Linux or KVM before, and for existing Linux and KVM users – familiar with KVM on x86 or System z but not Power – to gain all the benefits of PowerKVM. In addition to the favorable economics of KVM virtualization, PowerKVM fully leverages POWER8’s symmetric multithreading to achieve the highest performance possible on the hardware.
Kimchi and Ginger are add-on tools that provide an intuitive web panel with common tools for configuring and operating Linux systems. They are not required to manage a host or guests, but they make the PowerKVM experience more user-friendly; in fact, Kimchi can be used on x86 systems as well. Kimchi and Ginger for IBM PowerKVM 2.1.0 were released in June 2014.
The bottom line is that Kimchi and Ginger make it easier to adopt PowerKVM whether users have had prior Linux experience or not. That is the whole idea – to make it easier for users who do not have experience with Linux and virtualization to get on board and manage and use virtual machines using PowerKVM.
Aline Fatima Manera, Christy L Norman Perez & Paulo Ricardo Paz Vital
Staff Software Engineers, Open Virtualization, Linux Technology Center, IBM