You may have heard a lot about dual core processors recently and how they are licensed in the world's most popular database environments. Microsoft announced in late 2004 that it would charge only a single processor fee for a dual core chip, and since then there has been a flurry of debate, projections, and discussion about how vendors should price their software in response to hardware technology changes. Just before the writing of this article, Oracle stated that it will charge on a per core level for its software on any server, taking the exact opposite stance from the strongly marketed Microsoft one.
Since the Microsoft announcement on dual core pricing, despite the fact that no Microsoft software was available on any dual core architected server at that time, more and more clients have been asking me about dual core servers and IBM DB2 Universal Database for Linux™, UNIX®, and Windows® (DB2 UDB) licensing. They've also asked about sub-capacity pricing as they leverage hardware technologies for server consolidation plays.
To make things a little more confusing, different vendors use different or similar terms to mean different things that affect licensing (yes, the point of that sentence was to be confusing). For example, to some, a core is a processor; others regard a processor as the chip (or die) that plugs into a socket; and so on. Add in the ability of hardware servers to create both static and dynamic partitions (I'll call them logical partitions, or LPARs, in this article, though different hardware vendors call them different things), and you're likely to have more questions that need answering.
With all this terminology and marketing hoopla, it's hard to get a handle on what's going on. In this article, I'll discuss the licensing considerations of DB2 UDB in a dual core environment and give you some insight into this technology and its associated terminology, so you have a better understanding of the landscape in general. To wrap up, I'll also discuss sub-capacity pricing, a hot topic with today's server consolidation plays.
Quite simply, a core is the circuitry that executes computer commands. A chip, on the other hand, is the piece of silicon that a core resides on. Dual-core refers to a chip design and fabrication capability that puts two processor cores on one physical chip (some people go so far as to call it an 'SMP chip'). In contrast, on your typical workstation today (you know, the one that's likely sitting on your desk right now), a single core more than likely resides on a single chip.
A single core architecture is shown in Figure 1 below:
Figure 1. A single core processor
In today's marketplace, it's a safe bet to assume that from a licensing perspective, everyone refers to the previous figure as one processor. Along the way, performance-boosting technologies have been implemented on the single core architecture to help boost virtual processing speeds or throughput.
For example, Intel's hyper-threading is one such technology; it has been shown to deliver performance gains of up to 30% for some key workloads. Before we go on, let's spend a moment on terminology:
- Threads are simply sequences of instructions. Threading within a processor is a design technique used to improve performance by permitting what appears to be the simultaneous execution of two or more sets of instructions.
- Hyper-threading is the ability to essentially schedule separate thread execution plans in a highly efficient manner.
DB2 UDB supports hyper-threading, and there have never been any additional charges for this technology, since it didn't double the performance of the DB2 UDB server overall. (It won't for any database server, for that matter; well, I'm sure someone on some server with some workload will prove me wrong, but you get the point.) Other performance-based chip enhancements, like simultaneous multithreading (SMT), are supported in DB2 UDB at no additional cost as well.
In October 2000, IBM introduced a new line of highly scalable microprocessors called Power® chips (originally the Power4 series, since replaced by Power5, with Power6 and beyond planned). These Power chips changed the technology landscape by implementing two cores (that is, two separate circuitry units executing compute instructions) on a single chip (the thing that plugs into the board). In fact, the Power5 architecture today holds over 50 industry standard and application benchmark records that span a broad range of technical and commercial workloads. Other hardware vendors, like HP and Sun, soon followed suit. Most recently, Intel and AMD have introduced this technology to their chip families as well.
The advantage of dual core chips is that they can drive performance while reducing the power requirements, and the heat, that come with the higher and higher clock speeds of their single core counterparts. Dual core chips are denser as well, which means their physical requirements are along the lines of their single-core cousins, since they share the same form factor. Dual core chips would also appear to be more environmentally friendly in that they consume less electricity and have lower cooling (HVAC) requirements. In addition, the proximity of cached memory can yield performance advantages too.
With respect to performance, experts have called an end to Moore's Law (the prediction that the processing power of computer chips will double approximately every 18 months), which sparked general industry interest in dual core processors. Most recently, commodity chip manufacturers like Intel and AMD announced 2005 products featuring their first generation of dual core chips, to help combat the heat generated by the increased processing power of recent chip generations.
A dual core architecture is shown in Figure 2 below:
Figure 2. A dual core processor
Not much difference here really. You still have a single piece of silicon that plugs into the socket on your board. However, as previously noted, in this architecture, there are two pieces of circuitry that are used to execute compute instructions.
With the advent of dual core, there's a lot of confusion around the term processor. For example, IBM Power5 servers (and their benchmarks) are officially reported by the number of cores. So a 16-way Power5 SMP server has 16 cores, but only 8 actual sockets where physical processors plug in. Last I checked, Sun refers to a dual core chip as a single processor capable of running two threads. A Sun box with 4 dual core processors is called a 4-way SMP machine; IBM would call it an 8-way machine (since 4 dual core chips equal 8 cores). So what Sun calls a 4-way box and what IBM calls a 4-way box are different boxes - be careful. HP likewise refers to a single dual core chip as a processor, so an 8 core HP system is called a 4-way SMP server; and to make things more confusing, the terms processor and dual core chip are used interchangeably in HP's documentation.
SAP and TPC benchmarks do not require hardware vendors to list the number of cores that were used when publishing their results. However, the SPEC benchmark rules (for example, SPECfp_rate 2000) require that results are stated in number of processor cores. For example, the Sun Fire V40z SPECfp_rate result on their 4-way machine is listed under the 8-way results category. This may be one reason why IBM has decided to identify the SMP attributes of a server by cores and not physical processors. There are differences of opinion, obviously, but I recommend you look at processor cores when comparing performance results across solutions.
Generally, to the IBM Software Group, a processor is a functional unit within a computing device that interprets and executes instructions. For the most part, with multi-core technology, each core is considered a processor. For example, in a dual core Power5 chip, you would count two processors when licensing your software using a per processor metric. (Don't throw this article away yet, because some of this has changed; more on that in a bit.)
Today, for Power5-based systems (and other non-IBM high-end processors with dual core architectures - basically anything that isn't an OpenPower 710/720 server or an x86 processor architecture, namely Intel/AMD), IBM defines a core as a processor. This seems reasonable, since not only can each core execute software independently, but standard performance tests show that the processing power is at near or more than two-fold with these dual core architectures.
For example, consider the DB2 UDB TPC-C 16-way result that IBM announced in July of 2004 (almost one year ago as of the writing of this article - a lifetime in the world of benchmarks). The Power5 dual core architecture used for this benchmark yielded some rather impressive results. Specifically, this DB2 UDB benchmark was run on a 16-way IBM eServer p5 570 server and achieved 809,144 tpmC - surpassing competitive results on systems with up to four times as many licensed CPUs.
For comparison, SQL Server's best overall TPC-C result was run on a 64-way SMP server and achieved 786,646 tpmC; DB2 used one-quarter of the processors to beat this result. (Remember, with the Power5 dual core architecture, each core is considered a processor, as opposed to each socket. The IBM 16-way result refers to the number of cores, so there are really 8 sockets on this machine. Some vendors would refer to this as an 8-way dual core system, whereas the IBM eServer folks call it a 16-way server because of the way they interpret the term processor.)
This is just one example of the performance characteristics of leading edge dual core servers. In November 2004, IBM released a 64-way TPC-C result that delivered over four times the performance of the leading SQL Server 64-way result, 2.7 times Oracle's top TPC-C result with Real Application Clusters (RAC) technology on 64 processors, and 3.2 times Oracle's best 64-way SMP TPC-C result on 64 processors. All of these competitor results were on single core architectures.
The point of this discussion isn't to brag about the performance of DB2 UDB (okay, perhaps a little), but rather to illustrate the potential power of a well engineered dual core architecture.
For the reasons illustrated above, IBM Software Group charges per core for Power5-based servers (and other dual core servers that aren't x86 or OpenPower 710/720 boxes). If a Power5 server was sold with 8 physical processors (though it wouldn't be called an 8-way box), you would have to buy 16 DB2 licenses (since each processor has two cores). Remember that different vendors have different terminology frameworks here; I am trying to bridge them by illustrating how processor-based products would be licensed by IBM. Because an eServer machine may not be sold using the term processor, I map them here for simplicity.
In the 16-way TPC-C result mentioned earlier, when compared to Microsoft SQL Server, you would license 48 fewer CPUs of the DB2 UDB database software and still achieve better performance (8 processors x 2 for dual core licenses on this server = 16 licenses). Not only is that 48 fewer CPUs of the software, but it is also less maintenance and potentially less hardware cost too.
Although this is not an official statement from IBM, existing Power-based chips, and future chips of any architecture with a very high dual to single core performance ratio, will still require two license entitlements. Quite simply, for these powerful processor families, IBM continues to license the core, not the chip.
In late 2004, Microsoft announced that it will define a processor to be a chip (the piece of silicon that plugs into the computer motherboard) -- regardless of the number of cores on the chip. This differs from how both IBM and Oracle price this architecture when it comes to their database servers. There was a lot of press coverage about this announcement, despite the fact that no server was available from any vendor with a dual core architecture that Microsoft SQL Server could run on.
During this time, IBM had a number of opportunities to benchmark the performance of the new x86-based Intel and AMD dual core architectures, and hasn't found that their respective first generation technologies produce the type of results found with the Power5's dual core architecture (there are also a number of public benchmarks that suggest this assertion is true). Rather, the anticipated dual core to single core performance ratio appears to display normal generation-to-generation performance improvements, and represents an excellent architectural shift for further performance enhancements that should not be constrained by the power requirements and heat generation limitations discussed earlier.
IBM announced on April 21st, 2005 that for both the x86 architecture and OpenPower 710/720 platforms, only one IBM middleware software processor license will be required for the two cores on a dual core chip.
IBM made this announcement because our internal benchmarks for these dual core systems didn't match the core-to-performance ratio of other systems. This announcement, when you think about it, is consistent with the pricing strategy of the past. (Enhancements like hyper-threading, which never yielded double - or near double - the performance advantage, did not increase a client's per processor costs even when these technologies claimed to enable a server to behave as such.)
Back to our dual versus single core examples. If you had an 8-way SMP server that used dual core AMD chips, you would only need to buy 8 DB2 UDB processor licenses. If this same configuration were a dual core Power5 server, you would need to buy 16 DB2 UDB licenses. Again, this pricing seems to maintain value for clients. DB2 UDB on an 8-way Power5 dual core server (remember, this is 8 cores, so 4 sockets) has been shown to deliver 429,899 tpmC - almost two and a half times the top SQL Server 8-way result on a single core architecture, which yielded 175,366 tpmC.
To summarize, IBM software products that are priced per processor require a user authorization for each processor (as defined by IBM) that is available on the server. This is the approach IBM has consistently used since it originally introduced processor pricing. As of April 21st, 2005, there is an exception for x86-based AMD/Intel dual core processors, as well as those found within the OpenPower 710/720 servers from IBM: despite each core being a processor to IBM, you only need to license per physical processor (socket) on these boxes.
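To make the two rules above concrete, here is a small hypothetical helper of my own (this is not an IBM tool, and the function name is illustrative) that counts DB2 processor licenses under the per-core and per-chip schemes just described:

```python
# Hypothetical helper reflecting the rules described above; not an IBM tool.
# Power5 and similar high-end dual core chips license per core; x86 (Intel/AMD)
# and OpenPower 710/720 dual core chips license per physical chip (socket).
def db2_licenses(sockets, cores_per_chip, per_core):
    """Return the number of per-processor license entitlements required."""
    return sockets * (cores_per_chip if per_core else 1)

# A server with 8 dual core chips (8 sockets, 16 cores):
print(db2_licenses(8, 2, per_core=True))   # Power5-style licensing: 16
print(db2_licenses(8, 2, per_core=False))  # x86/OpenPower licensing: 8
```

The same physical box needs either 16 or 8 entitlements depending solely on which architecture family the chip belongs to.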
The recent classification of dual core architectures with respect to their relative performance helps to sustain the value relationship that you receive from software relative to your chosen hardware technology. In essence, this announcement helps you maintain value in the face of advancing hardware technology and moreover links you to a utility-like computing model (how many transactions can you run).
Despite marketing hoopla and FUD (fear, uncertainty, and doubt), DB2 UDB (and IBM middleware for that matter) remain more competitively priced than ever in today's marketplace.
Sub-capacity pricing is another challenge that software companies must address with the new ability for hardware vendors to offer both static and dynamic partitioning at the hardware layer -- thereby creating virtual servers. For example, imagine having a 4-way server and carving it up into 2 separate 2-way servers as shown in Figure 3 below:
Figure 3. Sub-capacity technologies
In the previous figure, you may install your database server on the blue partition (which houses processors 1 and 2) and have your application server software on the red partition (which houses processors 3 and 4). Each partition is like its own machine: it has its own processors, memory, and so on. In IBM-speak, we call them Logical Partitions (LPARs) when they are static. By static, I mean that the resources allocated to each partition do not change over time. Another form of partitioning is called Dynamic Logical Partitions (DLPARs), which give you the added flexibility of dynamically reallocating compute resources to a partition in response to business demands or policies. Since DB2 UDB V8.1.2, DB2 UDB has supported different vendors' hardware partitioning strategies - including dynamic ones.
Both LPARs and DLPARs allow users to address multiple system requirements using a single machine. This can deliver benefits such as server consolidation, business unit consolidation, and mixed production/test environments. LPARs have long been utilized by those who implement zSeries® technology, but are relatively new to the distributed world.
You should be aware that LPARs by themselves do not provide a significant increase in a system's availability. However, LPARs can be used to complement other availability strategies.
Since each partition is treated as a separate server, you can run a single environment with a single system image on a single partition. This can provide for a more cost efficient solution, since you can allocate resources to departments, database servers, clustering, and so on.
It's clear that sub-capacity technology enables clients to maintain the value of their software in the face of technology changes. Since your costs are more closely aligned with your actual capacity requirements, you only need to pay for additional capacity as application usage grows. This makes it easier to pilot new applications and to support smaller workloads. It also makes it easier for you to optimize your IT environment and design your systems to maximize utilization instead of minimizing software costs. And IBM is the only vendor that supports the advanced server virtualization capabilities found in the IBM eServer p5 and IBM eServer i5 server systems (the ability to take single processors and partition them into multiple run-time environments - even at a fractional processing level).
Taking into account the results on those Power5 servers discussed earlier in this article, one may envision a development environment where a 4-way server is virtualized into 20 servers for development, Q/A, and so on.
On the same day that IBM announced dual core pricing for x86 and OpenPower hardware, IBM officially announced sub-capacity licensing for selected DB2 family offerings that run on UNIX, i5/OS™, or Linux when running in partitions created on supported IBM and non-IBM systems.
You'll note that Windows was not included in the list of products and platforms eligible for official sub-capacity pricing. Unlike other systems, x86 and Itanium-based systems rely exclusively on virtual partitioning technology like EMC's VMware ESX and GSX servers, or Microsoft's Virtual Server. It is IBM's intent to support these systems in the future when these technologies have the full support of the IBM Tivoli® License Manager for IBM Software product (which is used to track usage under this program - more on this in a bit). If you require this kind of licensing consideration for your DB2 UDB environment in the meantime, contact your IBM representative.
To make it easier for clients to track sub-capacity compliance, IBM also announced a version of IBM Tivoli License Manager (ITLM) for IBM Software, which is a no charge version of ITLM. This version is fully compatible with the existing for-fee version of ITLM, but can only be used to track IBM software offerings under this program. Clients who want to track non-IBM software can substitute the for-fee version in the compliance process. Clients that elect to use sub-capacity licenses need to generate and submit to IBM a quarterly report on their sub-capacity usage.
As of the writing of this article, the DB2 family products that officially support sub-capacity licenses are:
- DB2 UDB Enterprise Server Edition v8.2
- DB2 UDB Data Warehouse Enterprise Edition v8.2
- DB2 Data Links Manager v8.2
- DB2 Net Search Extender v8.2
If you use other DB2 products and want to investigate their usage in a sub-capacity environment, please contact your IBM representative.
When licensing supported DB2 products with supported sub-capacity hardware technology in an LPAR environment (a list of supported hardware platforms appears at the end of this article), you have to account for the maximum number of processors available to the DB2 software during your ITLM reporting period. In other words, you must purchase DB2 licenses for the maximum number of processors that you will use.
If you're leveraging virtualization engines, you follow the same principle. However, at no time will you ever pay for more processors than are on the server. For example, if you took a 4-way server and created 20 partitions, each with a copy of DB2 UDB, you would only be required to purchase 4 DB2 UDB ESE licenses.
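That cap rule is easy to express in code. Here is a hypothetical sketch of my own (the function name and inputs are illustrative, not part of any IBM tool):

```python
# Hypothetical sketch of the sub-capacity cap rule; not an IBM tool.
def sub_capacity_licenses(partition_allocations, server_processors):
    """License the processors allocated across all partitions running the
    product, but never more than the physical processors in the server."""
    return min(sum(partition_allocations), server_processors)

# A 4-way server carved into 20 partitions, each allocated 1 processor and
# each running a copy of DB2 UDB: only 4 DB2 UDB ESE licenses are required.
print(sub_capacity_licenses([1] * 20, 4))  # → 4
```

If the partitions' allocations summed to less than the server's processor count, you would license only that smaller sum instead.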
There are technologies that allow you to allocate software execution to a set number of processors (sometimes called processor pinning). This is not hardware partitioning, and as such it isn't afforded the benefits of sub-capacity licensing outlined in this article. In fact, with most of these technologies, the execution of the software may be assigned to a set of processors, but when the other processors are idle, it will execute instructions there as well.
The following sections detail some typical examples of sub-capacity licensing and the total number of processors that would need to be licensed. For simplicity, these examples do not consider the number of cores on the processor.
An LPAR provides the capability to statically isolate compute resources and run independent operating systems of the same, or different, type on the same server. A typical example is a 12-way server that uses LPAR technology to create 3 virtual servers. Figure 4 shows 3 LPARs (purple, aqua, and pink) and the processors assigned to them (8, 2, and 2). In this example, you would need 4 DB2 UDB licenses (2 for DB2 Enterprise Server Edition (ESE) on AIX® and 2 for DB2 ESE on Linux).
Figure 4. A topological view of an SMP machine using sub-capacity technologies with LPARs
As previously mentioned, DLPARs provide the capability to respond to a change in business requirements in real time. This obviously creates some complexity when it comes to licensing any software.
Both the IBM eServer i5 and eServer p5 systems have the capability to run DLPARs. For example, consider the topology shown in Figure 5:
Figure 5. A topological view of an SMP machine using sub-capacity technologies with DLPARs
In this example, you can see the compute resources allocated to DB2 UDB change depending on the business need or the location in the business cycle. In fact, this example has a nice complexity to it in that processors are being dynamically allocated between different operating systems no less!
To identify the number of sub-capacity license entitlements for each product, you have to determine the maximum number of processors each product runs on across the 3 potential configurations in this example. The number of processors running DB2 Enterprise Server Edition varies by the location in the business cycle (peak times, normal operations, or simulation work as the company presumably ramps up to a peak load). The maximum number of processors ever allocated to DB2 in the business cycle occurs when running the simulation workload at the bottom of Figure 5. In this mode, a total of 6 license entitlements are required (4 under AIX and 2 under Linux); therefore, at all times you are required to procure 6 license entitlements for your DB2 software - even though other periods allocate less compute resource to DB2. If you want to more closely correlate licensing with actual usage, IBM also offers the On/Off Capacity on Demand (OOCOD) pricing model.
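The DLPAR rule amounts to taking a maximum over the business cycle. In this hypothetical sketch (not an IBM tool), the "simulation" figures match the Figure 5 example (4 processors under AIX plus 2 under Linux); the figures for the other two configurations are illustrative assumptions of mine, not values from the article:

```python
# Hypothetical sketch of the DLPAR rule: license the maximum number of
# processors the product ever runs on across all configurations in the
# business cycle. Only the "simulation" figures come from the Figure 5
# example; the other configurations' figures are illustrative assumptions.
configurations = {
    "peak":       {"AIX": 3, "Linux": 2},
    "normal":     {"AIX": 2, "Linux": 1},
    "simulation": {"AIX": 4, "Linux": 2},  # the business-cycle maximum
}

# Entitlements required = the largest total allocation in any configuration.
entitlements = max(sum(procs.values()) for procs in configurations.values())
print(entitlements)  # → 6
```

Even though the peak and normal configurations allocate fewer processors to DB2, you still procure for the simulation maximum at all times.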
This example shows how to count licenses when exploiting the virtualization capabilities available on the IBM eServer p5 and IBM eServer i5 systems that allow you to create shared processor pools. In this topology, the processors in the shared pool can be dynamically reallocated based on rules established by the client.
In this advanced sub-capacity scenario, there are some hardware-specific terms that you need to be aware of when trying to figure out how to license your DB2 software. A capped partition (shown in Figure 6 below) is a partition that has a set limit on the number of processors it can use. This limit is identified by the Processing Value Unit (PrU) metric (which can also be thought of as physical processor equivalents). Another metric, the Virtual Processor Value Unit (VP), represents (for capped partitions only) the number of physical processors that can be shared to attain the PrU value. In a capped partition, you are required to license software in accordance with the PrU value for the partition.
Figure 6. Virtualization example
For example, the capped partition E has PrU = 2 and VP = 4. This means that the DB2 workload on this partition will run on the equivalent of 2 processors, but those two processors are virtualized to run on portions of the processors in the processor pool, up to a maximum of 4 physical processors (for example, if the system borrowed 50% of the processing power of 4 processors in the pool, the resource allocation would look like 0.5 + 0.5 + 0.5 + 0.5 = 2.0). In Figure 6, you can see that partition E is a capped partition carved from a shared pool of nine processors. Since partition E is capped with a PrU of 2, at no time can it ever use more than 2 processors' worth of power from this pool (although those two processors may be a combination of fractions of up to 4 processors, as previously noted). In this example, partition E needs 2 DB2 processor entitlements.
In contrast, an uncapped partition can leverage extra processing power from the shared pool when it is available, up to an amount identified by the VP metric. Uncapped partitions are typically reserved for higher priority workloads that can benefit from the extra horsepower available when portions of the server are under-utilized. For an uncapped partition, the PrU value is the minimum number of processors assigned to the partition at all times, while the VP is the maximum number of processors it will try to acquire from the pool if they are available. For an uncapped partition, the VP metric acts a little differently, since it is assumed that uncapped partitions will use whole processors when available. In other words, while with a capped partition VP represents the number of processors that can deliver the power defined by the PrU metric, in an uncapped partition it simply means the maximum number of physical processors allocated to the partition. In an uncapped partition, you are required to license software in accordance with the VP value for the partition.
For example, in Figure 6, partition H will have a minimum of one processor assigned to it, and will acquire up to 3 processors if they become available. In this case, for DB2, you'll need 3 DB2 licenses since VP=3. Remember, in the uncapped partition, the attainment of VP=3 is done by whole processors and not fractional components as in the capped partition example.
So let's now summarize the example in Figure 6. How many DB2 licenses will you need to correctly license this server for your DB2 software? The answer is 8. Is that what you expected? If not, consider:
- 1 processor for partition B
- 4 processors for the capped partitions E and F ((PrU E = 2) + (PrU F = 2) = 4) which are capped at an aggregate compute power of 2 processors each (though they can each accumulate the power equivalency of 2 processors from up to 4 processors in the pool).
- 3 processors for the uncapped partition H (VP=3) since, despite starting with one processor as its compute power baseline (PrU=1), the partition will attempt to acquire 3 processors of compute power from the processor pool
The PrU and VP limits are set by the client in conjunction with IBM. Remember, a client is never required to purchase more license entitlements than the number of processors in the system. (A corollary to this is that the number of licenses required within a shared pool is capped at the number of processors within the shared pool.)
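The Figure 6 count can be sanity-checked with a short hypothetical sketch (my own code, not an IBM tool) that applies the three rules just described:

```python
import math

# Hypothetical sketch of the Figure 6 counting rules; not an IBM tool.
# Dedicated partitions license their assigned processors, capped partitions
# license their PrU value, and uncapped partitions license their VP value.
dedicated = {"B": 1}           # dedicated processors per partition
capped_pru = {"E": 2, "F": 2}  # PrU per capped partition
uncapped_vp = {"H": 3}         # VP per uncapped partition

total = (sum(dedicated.values())
         + math.ceil(sum(capped_pru.values()))  # round up any fractional PrUs
         + sum(uncapped_vp.values()))
print(total)  # → 8
```

The `math.ceil` on the capped sum is a no-op here because the PrU values are whole numbers, but it foreshadows the micro-partitioning case where PrU values can be fractional.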
This final example shows how to count licenses when exploiting the micro-partitioning capabilities available on the IBM eServer p5 and IBM eServer i5 systems. Micro-partitioning allows a single processor to be subdivided into portions of hundredths of a processor from the processor pool. The example that follows in Figure 7 is somewhat similar to Figure 6 as the process of counting the required license entitlements is the same. However, you need to understand and account for the benefits of micro-partitioning when determining the number of DB2 licenses you need.
Figure 7. A Micropartitioning example
The simple rule of thumb is that you aggregate your required licenses in the same fashion you did in Figure 6, taking into account the nuances with capped and uncapped partitions, and then round up any fractional component to the next whole number.
One final thing to note in this environment before turning to our example: in an uncapped partition, processor allocations are always whole values. In other words, an uncapped partition can never have a fractional component (like 2.5) as its processor allocation (either the baseline or the target - this is why you only see whole numbers in partitions G and H).
So let's now summarize the example in Figure 7. How many DB2 licenses will you need to correctly license this server? The answer is 7. Is that what you expected? If not, consider:
- 1 processor for partition B
- 3 processors (1.6 + 1.3 = 2.9 rounded up = 3) for the capped partitions E and F which are capped at an aggregate compute power of 1.6 and 1.3 processors respectively (though they can each accumulate the power equivalency of their assigned processing power from up to 4 processors in the pool)
- 3 processors for the uncapped partition H (VP=3) since, despite starting with one processor as its compute power baseline (PrU=1), the partition will attempt to acquire 3 processors of compute power from the processor pool
The PrU and VP limits are set by the client in conjunction with IBM. Remember, a client is never required to purchase more license entitlements than the number of processors in the system.
As previously mentioned, the IBM Tivoli License Manager for IBM Software will be installed and run by all customers implementing sub-capacity license entitlements. In fact, IBM is the first vendor to supply a tool for customers to leverage for software compliance. Customers only need to run one version of the ITLM server, but may run additional copies if their deployment warrants. After registering sub-capacity offerings, ITLM will monitor the use of all sub-capacity licenses via their part numbers and will record the monthly high water mark which represents the maximum copies of the sub-capacity supported product run concurrently. In subsequent quarters, clients need to generate a report, review it, and then submit it electronically to IBM. If the use level is within your entitlement level, then nothing needs to be done and you go on with operations, submitting the next use report the following quarter. If your use level is above your entitlement, a quote will be generated for you by IBM with the required additional license entitlements.
The following table shows the hardware technologies that officially support sub-capacity pricing with DB2 UDB today. If your hardware isn't supported, please contact your IBM representative. For the latest supported hardware configurations and DB2 editions and products that support sub-capacity licensing, see the IBM Passport Advantage Web site.
| Hardware | Operating system | Partition technology | Sub-capacity eligible |
| --- | --- | --- | --- |
|  | RHEL 3 u3 | LPAR |  |
| Sun | Solaris 8 | Dynamic System Domains | Yes |
| Sun | Solaris 9 | Dynamic System Domains |  |
| Sun | Solaris 10 | Dynamic System Domains |  |
The information in this article is provided on a best-effort basis as the author understands it and does not represent an official communication from IBM. Neither IBM nor the author is responsible for the information in this article.
Required TPC information
The following is required information when referencing TPC-C benchmarks. Refer to the TPC-C Web site for complete details.
Note: References to licenses and benchmarks are current as of the publication date of this article. Neither IBM nor the author is responsible for changes made by competing vendors, or for new benchmarks published in the course of business.
- DB2 V8.2 on IBM eServer pSeries 595 (64 Power5+ 1900MHz); 3,210,540 tpmC, $5.19/tpmC; Available: 05/14/2005
- DB2 V8.1 on IBM eServer p5 570 (16 Power5 1900MHz); 809,144 tpmC, $4.95/tpmC; Available: 09/30/2004
- Microsoft SQL Server 2000 Enterprise Ed. 64-bit on HP Superdome (64 Intel Itanium2 1500MHz); 786,646 tpmC, $6.49/tpmC; Available: 10/23/2003
- Microsoft SQL Server 2000 Enterprise Ed. 64-bit on Bull NovaScale 5080 C/S (8 Intel Itanium2 1500MHz); 175,366 tpmC, $4.53/tpmC; Available 06/30/04
- DB2 V8.1 on IBM eServer p5 570 (8 Power5 1900MHz); 429,899 tpmC, $4.99/tpmC; Available: 09/30/2004
- Oracle 10g on HP Integrity rx5670 Cluster 64Pl; 1,184,893 tpmC, $5.52/tpmC; Available 04/30/2004
- For more details, read the Dual Core pricing announcement and the Sub-capacity licensing announcement.
Paul C. Zikopoulos, BA, MBA, is an award-winning writer and speaker with the IBM Database Global Sales Support team. He has more than nine years of experience with DB2 and has written numerous magazine articles and books about it. Paul has co-authored the following books: DB2 Version 8: The Official Guide, DB2 - The Complete Reference, DB2 Fundamentals Certification for Dummies, DB2 For Dummies, and A DBA's Guide to Databases on Linux. Paul is a DB2 Certified Advanced Technical Expert (DRDA and Cluster/EEE) and a DB2 Certified Solutions Expert (Business Intelligence and Database Administration). You can reach him at firstname.lastname@example.org