
Optimizing IBM DB2 pureScale transaction throughput in virtualized IBM Power Systems

Danny Leung, Systems Optimization & Performance Analyst, IBM Canada Ltd.
Danny has been with IBM for five years and is a member of the Systems Optimization Competency Center (SOCC) team based in Toronto. His main area of focus is the POWER7 architecture running OLTP DB2 pureScale database workloads.
Mala Anand, PowerVM Performance Lead, IBM Corp.
Mala is a Senior Technical Staff Member in IBM Systems and Technology Group System Performance. She leads Power Virtualization performance testing, focusing on processor and memory virtualization and IBM PowerVM competitive performance. She previously led network and enterprise workload performance efforts on x86 Linux at the IBM Linux Technology Center.

Summary:  Today, hardware virtualization is prevalent throughout the IT industry. Having a single server perform the same amount of work previously done by several is a great value proposition for companies of any size. Although there are volumes of documentation available — both online and printed — the vastness of the resources makes them difficult to navigate. This article begins to address that challenge by providing a list of high-level planning and deployment best practices, followed by a series of lower-level optimization steps using a sample database environment running an IBM DB2® pureScale® installation on IBM POWER7® servers.

Date:  12 Jun 2012
Level:  Intermediate


Today, a database server is likely to be implemented on a virtual machine (VM), where processor, memory, disk, and network components are allocated from a shared pool of available resources on a physical machine. This provides the ability to consolidate multiple workloads onto a single server. Despite potentially higher upfront costs for larger physical machines, consolidating many workloads onto a single server can mean more efficient use of system resources, which can help lead to less power consumed, less data center floor space occupied, and ultimately a lower total cost of ownership (TCO).

The nature of this paradigm, however, introduces a coordination mechanism within the physical machine that assigns, maps, and manages the physical resources given to the virtual hosts, or logical partitions (LPARs) as they are known on IBM Power Systems™ servers. The implementation of this mechanism — usually known as the hypervisor — can provide a robust, scalable, and manageable virtualization solution that can help maximize clients' return on investment (ROI).

That hypervisor, part of the IBM PowerVM™ virtualization solution, is able to take LPAR profiles and deploy them onto IBM Power Systems in an efficient and seamless manner. It can virtualize multiple operating systems (OS), has the scalability to deploy up to a thousand VMs on a single server and integrates with a full suite of tooling to plan, deploy and manage the LPARs quickly and easily.

For example, with the IBM PowerVM solution, not only can you set the quantity of resources each LPAR receives, but you can also control — from the same interface — how those resources, such as processors and memory, are shared among the other LPARs on the system. LPARs that are less important and frequently idle are best implemented with a shared-processing LPAR profile; this means unused processing resources are deposited back into a shared processing pool for other LPARs to consume. For LPARs that are mission-critical, despite having some idle time, you might instead want a dedicated-processing profile, where processing resources are strictly coupled with that LPAR and no other.
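As a rough sketch, profiles of both kinds can be created from the HMC command line with `mksyscfg`. The managed-system name, partition names, and resource sizings below are hypothetical placeholders, and the exact set of supported attributes varies by HMC release — treat this as an illustration of the shared-versus-dedicated distinction, not a definitive recipe:

```shell
# Hypothetical managed system and LPAR names -- adjust for your environment.

# Uncapped shared-processor LPAR: when it idles, unused cycles
# return to the shared processing pool for other LPARs to consume.
mksyscfg -r lpar -m Server-9117 -i "name=test_lpar,profile_name=default,\
lpar_env=aixlinux,proc_mode=shared,sharing_mode=uncap,uncap_weight=64,\
min_proc_units=0.5,desired_proc_units=1.0,max_proc_units=2.0,\
min_procs=1,desired_procs=2,max_procs=4,\
min_mem=4096,desired_mem=8192,max_mem=16384"

# Dedicated-processor LPAR: whole processors are strictly coupled to
# this partition, suiting a mission-critical database host.
mksyscfg -r lpar -m Server-9117 -i "name=prod_db_lpar,profile_name=default,\
lpar_env=aixlinux,proc_mode=ded,sharing_mode=keep_idle_procs,\
min_procs=2,desired_procs=4,max_procs=8,\
min_mem=16384,desired_mem=32768,max_mem=65536"
```

The `uncap_weight` attribute on the shared profile governs how aggressively the LPAR competes for excess pool capacity, while `sharing_mode=keep_idle_procs` on the dedicated profile keeps idle processors bound to the partition rather than donating them to the pool.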

There are volumes of literature available (see the References section below) on virtualization, from short articles to comprehensive books, targeting a variety of scenarios, from specific database and application middleware workloads to generalized host deployment strategies. This article provides a short list of LPAR planning and deployment best practices, followed by a series of optimization steps using a sample database environment running an IBM DB2® pureScale® installation on IBM POWER7® servers, to demonstrate the advantages of those best practices.


