Make sure that you are using the correct database design to improve DB2 performance:
Use application programming and SQL techniques correctly
- Use partitioning for heavily used tables that play a central part in processing, to help ease locking and encourage parallelism. Also leverage partitioning for large tables (over 500,000 rows) that might benefit from spreading I/O across different partitions and devices. (A partitioning DDL sketch appears after this list.)
- Use database compression for tables that have large row sizes and predominantly read access profiles.
- Do you have any simple tablespaces that should be changed to segmented tablespaces for better DB2 performance? Are your segmented tablespaces properly sized?
- Do you have frequently updated tables with a large number of indexes? Double-check whether any of these indexes can be eliminated to improve DB2 performance. Extra indexes over busy tables can be a huge DB2 performance overhead; deleting two or three of them can sometimes cut a quarter to a half second off your transaction response time. So make sure only the indexes that are actually used are defined on your tables. (A catalog query sketch for finding unused indexes appears after this list.)
- It is important to drive processing into the SQL engine as much as possible. Review the applications in your company for:
- Lack of SQL Joins
- SQL subqueries
- Table expressions
- CASE expressions
- Limited fetch
- Also review your applications for:
- Excessive use of tablespace scans
- Excessive use of list prefetch and index scans
- Use the new DB2 pureQuery for DB2 Java applications to improve DB2 performance and security, and to statically bind the application access paths within your environment.
- Do you have too many locks against your tables? How many lock escalations happened today? The answers to these questions lie in looking at the order of your application processing and your programming methodologies, so that concurrent transactions can occur easily.
- Data performance reviews need to be exhaustive in order to achieve all the possible savings. Data access reviews are an important part of these reviews. Also:
- Analyze the types of access paths that are active in your environment.
- Review the number of invalid plans and packages
- Check the use of table compression for large tables with frequent SELECT statements run against them.
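To make the partitioning and compression items concrete, here is a minimal DDL sketch in DB2 for z/OS style; the database, tablespace, table and column names are hypothetical, and the partition boundaries are only an illustration, so adapt them to your own data before use.

```sql
-- Hypothetical range-partitioned, compressed tablespace and table.
CREATE TABLESPACE TSORDERS IN DBSALES
  NUMPARTS 3            -- spread I/O across three partitions/devices
  COMPRESS YES          -- helps large rows with mostly-read access
  LOCKSIZE PAGE;

CREATE TABLE ORDERS
  (ORDER_ID INTEGER       NOT NULL,
   ORDER_DT DATE          NOT NULL,
   AMOUNT   DECIMAL(11,2) NOT NULL)
  IN DBSALES.TSORDERS
  PARTITION BY (ORDER_DT)
   (PARTITION 1 ENDING AT ('2009-12-31'),
    PARTITION 2 ENDING AT ('2010-12-31'),
    PARTITION 3 ENDING AT ('2011-12-31'));
```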
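For the extra-index item, a hedged catalog query sketch: on DB2 9 for z/OS and later, the real-time statistics table SYSIBM.SYSINDEXSPACESTATS carries a LASTUSED date, and an index whose LASTUSED still holds the default date is a candidate for review. Verify the column and its default in your own catalog before dropping anything.

```sql
-- Candidate unused indexes: LASTUSED never updated since creation.
SELECT CREATOR, NAME, LASTUSED
  FROM SYSIBM.SYSINDEXSPACESTATS
 WHERE LASTUSED = '0001-01-01'
 ORDER BY CREATOR, NAME;
```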
Dave Beulke is an internationally recognized DB2 consultant, DB2 training and DB2 education instructor. Dave helps his clients improve their strategic direction, dramatically improve DB2 performance and reduce their CPU demand, saving millions in their systems, databases and application areas within their mainframe, UNIX and Windows environments. Search for more of Dave Beulke’s DB2 Performance blogs on DeveloperWorks and at www.davebeulke.com.
Leverage DB2 Version 9 BASIC and EXTENDED Plan Stability Features
Last week I talked about leveraging the Version 9 DB2 Plan BASIC and EXTENDED plan stability features. I also commented that if you don’t have DB2 9 and are going from DB2 Version 8 to DB2 10, you can still save your DB2 access plans. If you are migrating from DB2 Version 8, move all your DB2 Plans to Packages and make copies of the Packages with different OWNERs or COLLIDs such as “SAVED” or something equally obvious. This way you can copy back, include the backup collection, or manipulate your application to use these SAVED packages. Keeping the old SQL optimization and access path information is vital.
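As a hedged sketch of both approaches (the collection and package names are hypothetical), the DB2 9/10 route uses the plan stability REBIND options, while the V8 route copies the package into a SAVED collection before the rebind:

```sql
-- DB2 9/10 plan stability: keep the prior access paths at rebind,
-- and switch back to them if the new access paths regress.
REBIND PACKAGE(PRODCOLL.ORDERPGM) PLANMGMT(EXTENDED)
REBIND PACKAGE(PRODCOLL.ORDERPGM) SWITCH(PREVIOUS)

-- DB2 V8 alternative: save a copy of the package in a SAVED
-- collection before rebinding, then point the application at the
-- saved copy to back out.
BIND PACKAGE(SAVED) COPY(PRODCOLL.ORDERPGM) ACTION(REPLACE)
SET CURRENT PACKAGESET = 'SAVED';
```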
The following diagrams show how to set up a way to save your SQL optimization access path and then back out a module. This worked for me a long time ago, back in Version 3 or 4, and for a client with Version 7, so make sure you test your setup and back-out procedures. The process can then be quickly executed by operations personnel with a REXX exec or a library control process such as Endevor or other change management products. Since a picture is worth a thousand words, the two charts below illustrate how to set up a simple DB2 PLAN PKLIST access path preservation process.
Gene Fuh and Terry Purcell wrote an article describing this type of DB2 Plan technique and the CURRENT PACKAGESET options for protecting your access paths; see "Insurance for Your Access Paths Across Rebinds" for a further detailed explanation.
Remember, DB2 Plan Stability with DB2 9 and DB2 10 is the best and simplest way to protect your SQL Optimizer access paths. There are also software tools on the market for handling more complex backup and recovery situations. Use Version 10 DB2 Plan Stability as soon as possible, but there are several other techniques to protect your access paths and guarantee performance. My DB2 10 information has been put into a collaborative book, DB2 10 for z/OS – Cost Savings ….. Out of the Box, with information from Roger Miller, Julian Stuhler and Surekha Parekh.
In late 2010 the major theme of customer conversations was that every company was analyzing ways to save money and trim their IT budget. Those cost savings efforts slashed budgets by about 8.1% according to a 2010 CIO poll. The main method used was virtualization, combining all those UNIX and Windows environments that have sprouted up over the years. The new DB2 release comes along at just the right time, because Version 10 DB2 performance tuning solutions are potentially huge for many of the common designs, practices and DB2 performance problems in use today.
DB2 10 Performance Solutions for Too Many DB2 Indexes
Many DB2 index designs have far too many separate indexes defined on high performance tables. The DB2 10 Index INCLUDE COLUMNS feature provides the ability to consolidate some of those many indexes for improved DB2 performance. This provides a CPU reduction for any data modification process and saves storage, with fewer indexes to manipulate and store.
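As a hedged illustration (the table and index names are hypothetical), a unique index and a redundant extra index can often be collapsed into one DB2 10 index with INCLUDE columns:

```sql
-- Before: ORDIX1 enforces uniqueness on ORDER_ID, and ORDIX2 on
-- (ORDER_ID, ORDER_DT) exists only for index-only access.
-- After: one unique index carries ORDER_DT as an INCLUDE column.
CREATE UNIQUE INDEX ORDIX1
  ON ORDERS (ORDER_ID)
  INCLUDE (ORDER_DT);

DROP INDEX ORDIX2;   -- the now-redundant two-column index
```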
Even if your design cannot handle removing all the extra indexes, DB2 10 also improves INSERT performance by using more parallelism. When an INSERT SQL statement modifies a table with multiple indexes, DB2 10 prefetches the multiple DB2 indexes in parallel. By initiating parallel I/Os for the multiple indexes, the process does not wait on synchronous I/Os, reducing the overall INSERT processing time. This cuts down the window of possible contention within your system and improves DB2 performance tuning opportunities for all your applications.
DB2 10 Performance Solutions for Access Paths
DB2 10 Optimizer SQL improvements are going to help everyone with their DB2 performance tuning efforts, especially applications and DB2 systems that have a large number of list prefetch access paths. These access paths are improved because the SQL Optimizer does more processing during Stage 1 SQL evaluation, at the beginning of the data retrieval process.
The DB2 10 SQL optimizer now evaluates scalar functions and non-matching index predicates during the Stage 1 evaluation of the SQL access path. Data entries that previously waited for the Stage 2 process are handled early in the optimization process, dramatically limiting the number of qualifying data pages and rows that have to be evaluated. By eliminating these rows early in Stage 1, the I/O and CPU for these additional data entries are not passed into Stage 2, significantly reducing query elapsed time and improving overall query performance. This DB2 10 Optimizer SQL performance tuning solution alone will make a huge difference in all of your applications’ performance.
DB2 10 Performance Solutions for All Applications
These and other features in DB2 10 provide immediate CPU and cost savings for any company seeking to improve DB2 performance tuning. With additional new SQL, XML and data warehousing features, DB2 10 provides more availability, design options and DB2 performance tuning opportunities for new or existing applications. DB2 10 provides performance solutions and a way for your business to reduce costs and improve performance exactly when your CIO and company need it the most.
A major feature in the new Version 9.7 of DB2 is its new compatibility with Oracle. This compatibility takes many forms within the database, such as understanding operational syntax, stored procedure keyword syntax and the improved operational concurrency that I covered in this DB2 Performance blog. This compatibility with Oracle was developed because many clients are migrating from Oracle to DB2 for its improved DB2 performance, lower licensing costs, ease of use and lower total cost of ownership (TCO). All of these factors are very compelling given the current economic times for cost savings, consolidation of servers and the lower CPU and energy costs of processing within DB2 as compared to Oracle.
DB2 Handles Oracle’s Non-Standard Names
Many of the operational aspects of standard databases, such as the data definition language (DDL), data manipulation language (DML), structured query language (SQL) and data control language (DCL), are standard across databases such as SQL Server, Oracle and the DB2 family. Over the years, Oracle has given parts of its DDL, DML and DCL non-ISO-standard names to perform operations. DB2 has added these unique Oracle non-standard operational keywords to its lexicon of syntax to perform these operational activities. The most popular is the Oracle TRUNCATE statement, which is used everywhere to clear the contents of a table.
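For example (the table name is hypothetical), the statement an Oracle developer already knows works essentially unchanged; note that DB2 for LUW expects the IMMEDIATE keyword:

```sql
-- Oracle form:
TRUNCATE TABLE ORDERS;

-- DB2 9.7 form (IMMEDIATE is required on DB2 for LUW):
TRUNCATE TABLE ORDERS IMMEDIATE;
```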
DB2 and Oracle Stored Procedures Compatibility
Another major item is stored procedure compatibility, because stored procedures are utilized everywhere. Again, while the syntax of standard programming languages such as Java, C++, C# and others has been controlled over the years by ISO and other standards committees, the various DBMS vendors took liberties with their stored procedure languages to perform certain processes. Again, DB2 has added these Oracle syntax items so that any developer who is familiar with Oracle will be able to quickly and effectively code for any of these DBMSs. This one feature is critical for many companies, because all companies have multiple DBMS vendors and a programming staff that needs to write stored procedures on all of them.
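As a hedged sketch of what this looks like in practice (the procedure, table and column names are hypothetical, and on DB2 9.7 the Oracle compatibility support must be enabled for the database), a PL/SQL-style procedure can be compiled by DB2 essentially as-is:

```sql
-- Oracle PL/SQL syntax accepted by the DB2 9.7 compatibility features.
CREATE OR REPLACE PROCEDURE RAISE_SALARY (
  P_EMPNO IN NUMBER,
  P_PCT   IN NUMBER )
IS
BEGIN
  UPDATE EMP
     SET SALARY = SALARY * (1 + P_PCT / 100)
   WHERE EMPNO = P_EMPNO;
END;
```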
Oracle to DB2 Migration Success
I recently worked with a client during the DB2 Version 9.7 Cobra program and found that the syntax for both the operational aspects and the stored procedures is about 95% there for general conversions from Oracle to DB2. Working with the dedicated client staff, some of my consultant associates and the IBM Cobra people, we were able to migrate a vendor product from Oracle to DB2 Cobra and then take the DB2 code from the Linux, UNIX and Windows platform and run the application in a DB2 for z/OS environment. Given these Oracle to DB2 compatibilities, the client is positioning to leverage the industry’s lowest TCO, DB2 on z/OS, with a vendor and their solution. This is remarkable because they didn’t even understand mainframe concepts only three months ago.
Another Oracle to DB2 migration project, done by a friend of mine, was completed in only three days, so the conversion work can be done very quickly and you can become the next hero lowering your company’s total costs within IT. Migrate to DB2 today and save your company money; it might even save your job or your coworker’s.
When getting my taxes ready every year, I review the previous year’s activity. Reviewing 2008 showed that the majority of my consulting was spent fixing and tuning DB2 Java-based systems. This is not a big surprise, since the majority of my clients since 2005 have had DB2 Java performance tuning opportunities. Even in 2011, the trend of Java applications being built and implemented with DB2 z/OS and LUW continues at an ever-growing pace.
In most of these client situations, the majority of the problem has not been the database or the DB2 LUW or z/OS system; it has been related to the DB2 Java application processing. Java is the new workhorse and many systems are being implemented and not performing well. Working with my clients, I’ve discovered a variety of issues with these systems and over the next several weeks I will highlight the most common factors that kill performance for these new DB2 Java application systems.
Many object framework, architecture and programming pattern options are implemented with these languages. Given that there is no one object framework, architecture or application programming pattern that is right for every situation, there is neither a single right nor wrong way for performance success with your DB2 Java application.
(To be continued next week.)
With this post we are going to delve deeper into problems and recommendations to improve DB2 performance tuning and DB2 security.
DB2 Performance Discoveries and Recommendations
The operating system, CICS and DB2 systems were operating efficiently and were managed well within the overall environment. As the number of transactions and systems continues to grow, the systems will require more CPU and performance tuning attention. As the number of diverse workloads, software packages and transactions increases, the amount of time and resources needed to manage and tune the efficiency of the systems will continue to increase, regardless of the platform, database or application.
DB2 CICS RDO Thread Reuse
The connection between DB2 and CICS is very important. These connection settings are handled through the RDO interface, which has many DB2 thread reuse and security settings. Based on my experience, these settings can have a dramatic impact on performance and CPU consumption.
The settings were reviewed for the DB2 workload transactions running through the system’s CICS environments. These environments averaged over a million transactions per day and had extensive DB2 workloads.
The current CICS and DB2 connection settings did not fully leverage the performance settings that were available to improve connection security, thread and DB2 package or plan reuse. My recommendation was that the settings should be monitored and updated monthly to reflect the transaction workload and dedicated threads defined for many of the larger volume CICS DB2 transactions.
Given that the company had just begun implementing these RDO settings and improvements, there were still opportunities for RDO tuning through the priority, security and plan settings. The dedicated threads were recommended to run at the normal priority setting, with a security group profile that allows users to reuse their DB2 plan execution authorizations. These settings, along with the number of threads allocated, protected and used in the CICS DB2 pool, needed to be fine-tuned for each of the environments.
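A hedged RDO sketch follows; the entry, transaction, plan and group names are hypothetical, but THREADLIMIT, PROTECTNUM, PRIORITY and AUTHTYPE are the kinds of DB2ENTRY attributes being tuned here:

```
DEFINE DB2ENTRY(PAYR) GROUP(DB2RDO)
  TRANSID(PAYR) PLAN(PAYRPLAN)
  THREADLIMIT(10) PROTECTNUM(5)
  PRIORITY(EQUAL) AUTHTYPE(GROUP)
```

THREADLIMIT and PROTECTNUM control how many dedicated and protected threads are available for reuse, PRIORITY(EQUAL) runs the threads at normal priority, and AUTHTYPE(GROUP) lets transactions running under the same group reuse plan authorization.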
Reviewing and improving these settings through the CICS Statistics DB2 Connection reports, which are buried inside the overall CICS Statistics report, provides great detail on the transactions and thread usage. Changing these RDO settings had an immediate positive effect by reducing CPU consumption. The company could save more by fine-tuning all their RDO settings.
DB2 Security Improvements
The DB2 environment had security settings that were used in conjunction with the security system, Top Secret. The company was using both DB2 and Top Secret security extensively, causing additional system-checking overhead. DB2 and Top Secret should be configured to leverage the use of secondary authorization security verification. The use of secondary authorization security checking is common at many large government and banking clients and should be implemented as soon as possible.
By using secondary security, the system checking security is done once and reused because of the grouping that allows CICS transaction threads to be reused. This will save the extra system overhead of double-checking security for every transaction execution and save CPU.
- If your system is heavily dependent on CICS transactions:
- Make sure that you are reviewing the CICS Statistics DB2 Connections reports on a regular basis.
- Update the RDO settings so that the DB2 workload is using mainly dedicated and reused threads.
- Ensure that your RDO settings use a minimum of pool threads.
- Make sure that your workload never has to wait for a thread to execute work in your DB2 CICS environment.
- Make sure that you aren’t duplicating security checking with both DB2 and another security product.
- Use group IDs for your DB2 authorizations as much as possible so that plan, package and CICS thread authorizations can be reused (see the GRANT sketch below).
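A minimal sketch of the group authorization idea (the plan and group names are hypothetical): grant plan execution to a secondary authorization group rather than to individual user IDs, so the check is done once for the group and thread and plan reuse is possible.

```sql
GRANT EXECUTE ON PLAN PAYRPLAN TO PAYRGRP;
```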
As I referenced in an earlier DW post, DB2 Version 9 for z/OS introduced more than 50 great new performance features. Many of these enabled companies to reduce CPU demand in their system and application environments.
One of the features introduced in Version 9, selecting modified data, helps your applications minimize the number of SQL calls and the number of trips to the DB2 system. This feature provides the ability to get data values with SQL SELECT clauses wrapped around your standard UPDATE, INSERT, MERGE and DELETE SQL statements.
By providing the ability to perform an SQL SELECT from an UPDATE, INSERT, MERGE or DELETE SQL statement, the application is able to retrieve many important values previously retrieved with additional SQL statements. By retrieving these data values when the UPDATE, INSERT, MERGE or DELETE SQL statement is executed, the application can get data values for ROWIDs, identity, sequence and timestamp columns, and other values or defaults in the database.
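A hedged sketch (the table and column names are hypothetical): a single statement inserts the row and returns the identity and timestamp values DB2 generated for it; OLD TABLE works the same way for values a DELETE removes.

```sql
SELECT ORDER_ID, CREATED_TS
  FROM FINAL TABLE
       (INSERT INTO ORDERS (CUSTOMER_ID, AMOUNT)
        VALUES (1234, 99.95));
```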
What is also great about using a SQL statement to select the modified data is that the application can get the data values of columns modified by BEFORE INSERT triggers. By getting these column data values, your application can use them in other parts of the application immediately. This is especially valuable for maintaining referential integrity relationships that need the values of ROWIDs, identity or sequence column values.
The new feature of selecting modified data, or SQL SELECT from an UPDATE, INSERT, MERGE or DELETE SQL statement, is also very powerful because it can eliminate a large amount of network traffic and additional SQL calls. Processes and their related SQL that were done in multiple calls to DB2 can now be done in a single call. Additionally, before this selecting-modified-data SQL enhancement, those multiple SQL calls sometimes had to transmit large chunks of data back and forth across the network. Now all that extra data transmission traffic is eliminated.
By implementing this powerful feature of selecting modified data, or SQL SELECT from an UPDATE, INSERT, MERGE or DELETE SQL statement, you can dramatically reduce the number of SQL calls, reduce your network traffic, and improve your application response time while reducing your overall CPU demand.
DB2 offers application designers new functionality for their data warehousing requirements. The new DB2 10 Temporal Tables provide a way to have a snapshot in time of the status of customers, orders or any other type of business situation.
DB2 Temporal Tables, with their built-in functionality, automatically understand the business time or system time of the data entered into the system. This functionality is ideal for handling and documenting the condition of any business aspect at a certain time. It is driven by two new period definitions, BUSINESS_TIME and SYSTEM_TIME, defined over pairs of timestamp columns within a table definition. Using these new time periods within a DB2 Temporal Table definition provides a system-maintained, period-maintained or bi-temporal time period for your data.
Many systems today have manual processes or utilities that manage or migrate their real-time data to history tables. The new DB2 Temporal Tables, with their system time and business time periods, can be used in conjunction with DB2’s automatic versioning (or user-defined triggers) to migrate transactional temporal table data to another user-defined HISTORY table. Having these facilities built into the database greatly improves regulatory compliance, operations and overall DB2 performance tuning.
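A minimal system-period sketch under those facilities (the table and column names are hypothetical): the SYSTEM_TIME period is defined over generated row-begin and row-end columns, and versioning ties the base table to its HISTORY table.

```sql
CREATE TABLE POLICY
  (POLICY_ID INTEGER NOT NULL,
   COVERAGE  INTEGER NOT NULL,
   SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   TRANS_ID  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD SYSTEM_TIME (SYS_START, SYS_END));

CREATE TABLE POLICY_HIST LIKE POLICY;

-- Old row versions now migrate to POLICY_HIST automatically.
ALTER TABLE POLICY
  ADD VERSIONING USE HISTORY TABLE POLICY_HIST;
```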
Separating the real-time transaction data from the old data within your database using the HISTORY table requires planning and design steps. The separation of old data from new data guarantees that application and SQL performance does not suffer when your database is fully populated. Separation of the old and new data also helps DB2 performance management, so more resources can be devoted to maintaining the new base transaction data where DB2 performance matters for business operational success.
Over the coming weeks I will go through the steps and design decisions required to set up a Temporal Table. We will go through the SYSTEM_TIME, BUSINESS_TIME and a bi-temporal table design.
To figure out the best temporal table design, you need to think through the various options and considerations that will affect its performance. The most important aspect of your temporal table is the answers that your applications or users expect from it. The best way is to figure out the time aspect that the application is trying to capture. Are your applications looking for the financial value, the insurance coverage level, enrollment status, customer value or something else?
The temporal table status can be contingent on two types of settings: business time or the system processing time. If the processing is delayed and the system time is later than expected, does that affect your temporal table status? Or are you using the temporal table in a real time scenario where either the business or system time will affect the meaning of the data? There are many ways to respond to the situations and questions, but the design decision should be based on the application and user questions that need to be answered. So it is best to test both SYSTEM_TIME and BUSINESS_TIME scenarios out and see which design provides the best answers with the best performance.
The next design point is to figure out your timestamp type. Do your temporal table application answers require distinct timestamps throughout the system? Your DB2 10 system now has new capabilities to provide a column combination that is unique system-wide within the table. This DB2 syntax, WITHOUT OVERLAPS, can be used in your temporal table only with BUSINESS_TIME values. After the temporal table is created, an index is defined on it using your unique columns and the BUSINESS_TIME WITHOUT OVERLAPS keywords, as sketched below. BUSINESS_TIME is the only period the WITHOUT OVERLAPS keywords work with.
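A hedged sketch of that definition (the table and column names are hypothetical): the BUSINESS_TIME period is declared over the start and end columns, and the unique index adds WITHOUT OVERLAPS.

```sql
CREATE TABLE COVERAGE
  (POLICY_ID INTEGER NOT NULL,
   BUS_START DATE    NOT NULL,
   BUS_END   DATE    NOT NULL,
   PERIOD BUSINESS_TIME (BUS_START, BUS_END));

-- No two rows for the same POLICY_ID may have overlapping periods.
CREATE UNIQUE INDEX COVERIX
  ON COVERAGE (POLICY_ID, BUSINESS_TIME WITHOUT OVERLAPS);
```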
When BUSINESS_TIME WITHOUT OVERLAPS is specified, the columns of the BUSINESS_TIME period must not be specified as part of the constraint. The specification of BUSINESS_TIME WITHOUT OVERLAPS adds the following to the constraint:
- The end column of the BUSINESS_TIME period, in ascending order
- The start column of the BUSINESS_TIME period, in ascending order
For TIMESTAMP(12), the minimum value is 0001-01-01-00:00:00.000000000000 and the maximum value is 9999-12-31-24:00:00.000000000000.
For DATE the minimum is 0001-01-01 and the maximum value is 9999-12-31.
A system-generated check constraint named DB2_GENERATED_CHECK_CONSTRAINT_FOR_BUSINESS_TIME is also generated during this definition process to ensure that the value of the end column is greater than the value of the start column. BUSINESS_TIME WITHOUT OVERLAPS must not be specified for a partitioned index.
There are a number of considerations when creating your DB2 10 temporal table. When your application needs system-wide uniqueness, the BUSINESS_TIME WITHOUT OVERLAPS option provides that capability, with some cautions. Check out other posts on temporal tables via DeveloperWorks or at my site, www.davebeulke.com.
Well, the votes are in: 22 billion rows is big enough data. It’s not the billions of web log rows of a Google or Facebook, but it’s big enough for everyone. One of the comments that struck me was that one in a million happens 22,000 times. So whatever your criterion is for big data, it is more a state of mind about the amount of data than the actual terabyte amounts or the number of rows. Regardless of what database systems you work with, big is a relative term. Just ask your SQL Server, Oracle and Sybase DBA friends what they consider a big system. Usually the answer is nowhere near what you get for DB2 z/OS or even DB2 LUW systems. I talked about this last year in my blog ("Performance is Relative").
Other comments and questions received about last week’s blog asked for more clarification on the idea of keeping a database design simple. So below are three different ways to keep your big data data warehouse design simple.
First: There are reasons Bill Inmon’s and Ralph Kimball’s decentralized and centralized data warehouse ideas are so popular: those design patterns work. Design patterns for all types of IT applications, such as Java/.NET MVC (model view controller), various business models and standard processes, have been extensively analyzed and endorsed over many years through design pattern books, conferences and government studies. The decentralized and centralized data warehouse design patterns work, and your design should use them for your data warehouse performance. Big data or not, there is no reason to do something more complex. Starting with these design patterns, using and optimizing a simple pattern of Fact table(s) surrounded by Dimension table(s) will give you data warehouse performance. Decentralizing or extending these design patterns with additional Fact tables and slowly changing Dimension tables will minimize the amount of big data referenced in a typical transaction, and your data warehouse performance won’t be an issue.
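As a hedged sketch of this first point (all names are hypothetical), the pattern is nothing more than a Fact table keyed to the Dimension tables that surround it:

```sql
CREATE TABLE DIM_CUSTOMER
  (CUST_KEY  INTEGER NOT NULL PRIMARY KEY,
   CUST_NAME VARCHAR(100) NOT NULL);

CREATE TABLE DIM_DATE
  (DATE_KEY INTEGER NOT NULL PRIMARY KEY,
   CAL_DATE DATE NOT NULL);

-- The Fact table holds only keys and measures.
CREATE TABLE FACT_SALES
  (CUST_KEY INTEGER NOT NULL REFERENCES DIM_CUSTOMER,
   DATE_KEY INTEGER NOT NULL REFERENCES DIM_DATE,
   AMOUNT   DECIMAL(11,2) NOT NULL);
```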
Second: Make sure to normalize your big data design. It’s typical to try to consolidate everything within a data warehouse design. Unfortunately, having too many elements in a table forces too much data into an application transaction, and data warehouse performance can suffer. Just as decentralized and centralized data warehouse design patterns have been used for years, database table normalization has been around even longer because it logically optimizes your design. The normalization process has been documented everywhere over the years, and it is effective for making sure the table elements relate to the whole database table key(s). Combining table keys or designs causes excessive repeating data or data groups, while over-normalization leads to excessive application joins. Normalization is about striking a balance, and no one does it perfectly the first time. Normalize your data warehouse design several times so it can strike a performance balance for all critical applications.
Third: Design, test, redesign, test and repeat. Schedule enough database and application meetings and testing time to understand all the application transactions and reports. Data warehouse performance and modeling big data can get unwieldy, so testing your design early is vital. Sometimes big data table population numbers cause tools to abort; cut the number of zeros down and model all your application transactions against your database design, then build it and run applications against it. Data warehouse performance requires real-life testing and actually running the code or SQL that interfaces with the design. No one has time to do it perfectly, but everyone will be mad if you have to redesign and do it all over right before implementation. Know your performance before production through thorough testing. Big data and data warehouse performance require design and testing; make sure to do these several times during your development, with as much big data as possible.
These are only some of the simple truths that ensure your big data data warehouse is a performance success.
Within the zEnterprise announcement in July 2010 there were several exciting comments about the new query performance within the IBM Smart Analytics Optimizer. This smart system leverages the integrated platform, providing industry-leading scalability for data warehouse and business intelligence workloads. Queries that scanned entire terabytes of data and were previously avoided can now be executed in seconds through the IBM Smart Analytics Optimizer processing. This capability provides data mining features for quickly getting answers to the clustering, associations, classification and predictions within your data warehousing environment.
Within some of the documentation, IBM testing of the Smart Analytics Optimizer shows huge performance boosts. Queries that were scanned and then subsequently optimized sometimes showed an improvement of 54 times, and the cost of the query improved 711 times. This type of improvement is very exciting, and especially what new data warehousing and business intelligence workloads need on the mainframe.
Check out all the zEnterprise documentation, especially the information on the IBM Smart Analytics Optimizer. This was just the beginning of a resurgence of the new mainframe that supports any mainframe, UNIX or Windows workload. Known as a company’s “private cloud,” it allows IT departments to leverage tremendous performance improvements while reducing query cost substantially.
During the 2010 IBM Z Summit road show, there were several presentations detailing the mainframe platform advantages over UNIX and Windows platforms such as the lowest total cost of ownership, the best availability and unparalleled scalability. These presentations cut through the rumors with detailed facts and figures of different platform configurations. Download these presentations and distribute them to your management for a little reminder why the mainframe continues to be the best platform for your enterprise applications.
The Windows and UNIX platform proponents always discount and minimize the total cost of ownership, availability and scalability topics. It is our duty to periodically remind management of the extra costs of these UNIX and Windows systems, with their huge power consumption costs, software license fees and the software maintenance costs of working with several hundred or thousands of disparate systems. The mainframe quietly continues to process the majority of the transactions at the Fortune 500 companies, and everyone, especially younger management types who think the world can run on an iPhone, needs to understand that the System Z infrastructure is the best backbone for any company.
The System Z mainframe is also evolving, since it now has specialized processors such as the IFL, zIIP and zAAP to reduce overall operational and licensing costs. These specialty processors, along with the smaller configurations of the System Z, offer a single platform that can consolidate any number of UNIX workloads into one smaller, greener energy footprint with a better licensing configuration.
The presentations detail benchmarks, licensing fees and labor costs of various mainframe versus UNIX platform configurations. The figures show it sometimes takes double the number of processor cores on a UNIX configuration just to start to scale out. Even more UNIX processors are required to achieve transaction rates that still reach only one-fourth of what the mainframe System Z executes. These UNIX systems are also dedicated to the production transaction environment, with no thought of supporting the testing, QA or failover facilities that have yet to be priced or considered, features that come standard within the System Z environment.
System Z also continues to grow because of its faster chips. Ask any PC or UNIX platform personnel “what platform has the fastest clock speed processors” and you will quickly find out who keeps up with the industry information. The chip clock speeds of the System Z and other IBM platforms have improved like the rest of the PC industry. In fact, the System Z z10 chip operates at 4.4 GHz and comes in a 64-way quad core configuration that can speed up any application performance problem. This is almost twice as fast as the HP Superdome processors and a third faster than the Intel Nehalem chips.
So the mainframe continues to lead the industry. Does your management know the cost savings and performance figures of System Z? Tell them and show them the presentations before someone tries to “replace the mainframe” again with a more troublesome, power-hungry, badly performing clustered iPhone configuration.
Over the last three years, my clients have shown that multiple framework, architecture and programming patterns are usually implemented within the same project. The problem is that the poor-performance lessons learned from one application implementation are not fully understood, so the performance problems are carried forward and replicated into the next architecture, framework or pattern iteration, including DB2 Java applications.
Each application is different and each service or process within the application is unique. Step back and be flexible in your design patterns to understand that one or two architectures, frameworks or programming patterns are not correct for every situation. Your design should reflect the application requirements and the correct implementation for achieving the best DB2 Java performance might mean a variety or mix of approaches.
Objects within Java are great for flexibility and reuse. Java services and open source products such as Hibernate, iBatis and Ruby, and techniques such as Java Persistence Architecture (JPA) and Data Access Objects (DAO), are great for accessing the DB2 database. Many of these techniques are common in today’s DB2 Java applications. My clients have experienced problems with these techniques when the application processing does not properly handle transaction integrity or the unit of work. When this happens, the DB2 Java application processes have usually connected to the database multiple times, processed the transaction too many times, or not committed or rolled back the transaction properly. These DB2 Java transaction situations usually manifest themselves in JDBC errors or referential integrity issues that developers blame on the database. Unfortunately, it is not the database but the application coding of the services that causes the problems.
In the coming weeks I will talk further about DB2 Java applications, their processing and issues that I have experienced with my clients. I know it will help you avoid some of these problems too.
In July 2010 IBM announced the zEnterprise with its new integrated IBM POWER7 and IBM System x blade environments. This provides users with the ability to run traditional mainframe applications alongside any UNIX and Intel based applications. These new capabilities provide a one-stop cloud environment for businesses to deploy and scale any application from any platform.
In August 2010 IBM announced the new deployment options for the DB2 pureScale system to the IBM System x environments. The ability to leverage the DB2 pureScale environment provides the same multi-member DB2 data sharing mainframe capabilities within a new SUSE Linux based solution. Testing shows that the DB2 pureScale environment performs and scales almost linearly, like the z/OS DB2 Data Sharing mainframe environment.
Providing these DB2 Data Sharing type capabilities within the System x architecture provides another great performance and scalability option within the DB2 LUW family. The DB2 pureScale option with its tremendous scalability provides a great expansion option for existing DB2 LUW systems that are running out of capacity within their single computing environment footprint. And now, with the ability to deploy them to the zEnterprise, companies that have DB2 LUW systems can get the reliability and performance management of the mainframe.
This DB2 pureScale on System x also provides another DB2 LUW open solution for companies that need to consolidate UNIX or Intel based environments and applications. So regardless of the application or system requirements, the DB2 family provides the most open, cost-efficient options. Check out the full IBM DB2 pureScale announcement at IBM.
I received some great comments and questions based on the blog entry/video about the many new features in DB2 Cobra. One of the more interesting questions asked was, “How much work is it, or how compatible is the new DB2 with Oracle?” The answer: DB2 is very compatible, so compatible in fact that many people have migrated from Oracle to DB2 in only a matter of days.
So what are the reasons or justification for management to consider changing their database from Oracle to DB2? The top four reasons I have heard recently are:
- Oracle contract pricing continues to go up. Every time we look, Oracle wants more money because we had to add CPU cores to get application performance.
- Performance for our BI OLAP Oracle work does not respond. We always have to rewrite the SQL or create new indexes to make it work. When we tried it on DB2, their optimizer automatically rewrote the SQL, avoiding the manual SQL rewrite and improving application performance and overall response time.
- DB2’s disaster recovery and HADR feature is much better than any of Oracle’s stand-by or fail-over solutions. Now that the application is compatible with DB2, there is no reason to risk the business because of an Oracle failure.
- Saving IT budget money is paramount these days, and with our Oracle database we continue to spend money on expanding disk requirements. DB2 data compression and the new index compression provide a way to reduce disk costs, improve database performance and minimize backup and recovery while reducing disk requirements and saving money.
Converting your database from Oracle to DB2 is very easy. You can save money for your company and your IT department, and maybe save your job, by migrating to DB2 today.