In late 2010 the major theme of customer conversations was cost: every company was analyzing ways to save money and trim its IT budget. Those cost-saving efforts slashed budgets by about 8.1% according to a 2010 CIO poll. The main method was virtualization, consolidating all the UNIX and Windows environments that had sprouted up over the years. The new DB2 release comes along at just the right time, because the Version 10 performance solutions address many of the common designs, practices and DB2 performance tuning problems in use today.
DB2 10 Performance Solutions for Too Many DB2 Indexes
Many DB2 index designs have far too many separate indexes defined on high performance tables. The DB2 10 index INCLUDE columns feature provides a way to consolidate some of those indexes for improved DB2 performance. Fewer indexes to maintain and store means a CPU reduction for every data modification process as well as storage savings.
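As an illustration, here is a minimal sketch of the kind of consolidation the INCLUDE feature allows. The table and index names are hypothetical, and note that in DB2 10 the INCLUDE clause applies to unique indexes.

```sql
-- Before DB2 10: a unique index on the key, plus a second index that
-- exists only to give index-only access to two extra columns
-- (all names here are hypothetical).
CREATE UNIQUE INDEX ORDIX1 ON ORDERS (ORDER_NO);
CREATE INDEX ORDIX2 ON ORDERS (ORDER_NO, CUST_ID, ORDER_DATE);

-- DB2 10: the consolidated definition of ORDIX1 carries the extra
-- columns for index-only access without making them part of the
-- unique key, so the redundant index can be dropped.
CREATE UNIQUE INDEX ORDIX1 ON ORDERS (ORDER_NO)
    INCLUDE (CUST_ID, ORDER_DATE);
DROP INDEX ORDIX2;
```

With one fewer index, every INSERT, UPDATE and DELETE against the table maintains one less index structure.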
Even if your design cannot stand to lose any indexes, DB2 10 also improves INSERT performance through parallelism. When an INSERT modifies a table with multiple indexes, DB2 10 prefetches the multiple indexes in parallel. By initiating parallel I/Os for the multiple indexes, the process does not wait on synchronous I/Os, reducing overall INSERT processing time. This shortens the window of possible contention within your system and improves DB2 performance tuning opportunities for all your applications.
DB2 10 Performance Solutions for Access Paths
DB2 10 optimizer improvements are going to help everyone with their DB2 performance tuning efforts, especially applications and DB2 systems with a large number of list prefetch access paths. These access paths are improved because the optimizer does more processing at the beginning of data retrieval, during Stage 1 SQL evaluation.
The DB2 10 optimizer now evaluates scalar functions and non-matching index predicates during Stage 1 of access path processing. Predicates that previously waited for Stage 2 are applied early, dramatically limiting the number of qualifying data pages and rows that must be evaluated. By eliminating these rows in Stage 1, the I/O and CPU for the extra data entries are never passed to Stage 2, significantly reducing query elapsed time and improving overall query performance. This optimizer improvement alone will make a huge difference in your applications' performance.
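As a hedged, hypothetical example of the class of predicate involved (table and column names are made up):

```sql
-- A scalar-function predicate like YEAR() was historically evaluated
-- in Stage 2, after qualifying rows were already retrieved. DB2 10
-- can apply this kind of predicate earlier, during Stage 1,
-- discarding non-qualifying rows sooner.
SELECT ORDER_NO, ORDER_TOTAL
  FROM ORDERS
 WHERE YEAR(ORDER_DATE) = 2010
   AND ORDER_TOTAL > 500.00;
```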
DB2 10 Performance Solutions for All Applications
These and other features in DB2 10 provide immediate CPU and cost savings for any company seeking to improve DB2 performance tuning. With additional new SQL, XML and data warehousing features, DB2 10 provides more availability, design options and DB2 performance tuning opportunities for new or existing applications. DB2 10 provides performance solutions and a way for your business to reduce costs and improve performance exactly when your CIO and company need it the most.
Three Things to Do to Retain Your DB2 Query Performance
According to the majority of the DB2 10 beta customers, the performance figures for DB2 10 are real, and sometimes better than the advertised 5-10% out-of-the-box savings. That is not to say your DB2 10 migration and experience will be as good or better. DB2 version migration horror stories abound, and I have helped many clients tune and improve their DB2 system and query performance. Some of those engagements improved query performance by correcting DSNZPARM settings that had obviously been mangled, possibly during a migration. So here are three things you can do now, in DB2 9, that will help your application retain its current query performance and get the most out of your DB2 10 system and applications.
Gather Current Performance and DB2 Explain Output
To retain your query performance going into DB2 10, you first need to understand and measure your current DB2 9 query performance. Gather performance figures and EXPLAIN output for all your applications, and understand the SQL access paths and overall processing of each application. Having these statistics before a DB2 10 migration is the first step to understanding how much improvement your systems and applications gain from the new DB2 10 features.
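A minimal sketch of gathering that baseline (statement and QUERYNO are hypothetical; the PLAN_TABLE columns shown are the usual access-path indicators):

```sql
-- Capture the current access path for a statement.
EXPLAIN PLAN SET QUERYNO = 100 FOR
SELECT CUST_ID, CUST_NAME
  FROM CUSTOMER
 WHERE CUST_ID = ?;

-- Save the key access-path columns for comparison after migration.
SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD,
       ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY, PREFETCH
  FROM PLAN_TABLE
 WHERE QUERYNO = 100
 ORDER BY QBLOCKNO, PLANNO;
```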
Leverage DB2 Version 9 BASIC and EXTENDED Plan Stability Features
To retain your query performance, leverage the DB2 9 BASIC and EXTENDED plan stability features. They provide an easy way to preserve, or fall back to, a known good package access path. By using this feature you can save a good access path together with the EXPLAIN information gathered in Step 1. Setting up plan stability also exposes and documents any special bind parameters or table/index statistics considerations before the migration to DB2 10. In addition, the REBIND process will help your system make the transition from DB2 plans to DB2 packages and bind everything under the current DB2 9. If you are migrating from DB2 Version 8, convert all your DB2 plans to packages and make a copy of the packages with a different OWNER or COLLID, such as "SAVED" or something equally obvious. This way you can copy them back, include the backup collection or point your application at the SAVED packages.
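A minimal sketch of the plan stability commands, with a hypothetical collection and package name:

```sql
-- Rebind with EXTENDED plan management: DB2 keeps the current access
-- path copy plus previous and original copies in the catalog.
REBIND PACKAGE(MYCOLL.ORDERPKG) PLANMGMT(EXTENDED)

-- If the new access path regresses, fall back to the saved copy.
REBIND PACKAGE(MYCOLL.ORDERPKG) SWITCH(PREVIOUS)
```

These are DB2 commands rather than SQL statements, typically issued through a DSN session or batch job.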
Determine Impact of DB2 10
Next, determine whether there is a high, medium or low probability that your DB2 packages will be able to leverage, or be influenced by, the new DB2 10 features. The improved parallel INSERT into multiple indexes will improve elapsed time but not CPU time, while the new Stage 1 optimization improvements can cut both elapsed and CPU time for your applications. Determine which of your individual application DB2 packages could benefit from the many DB2 10 features. (See my DB2 10 white paper for a complete list of the enhancements.) Analyze your applications and understand which ones will benefit, or may have potential issues, before your DB2 migration. Use your BASIC and EXTENDED plan stability packages to run the best performing access path, whether it comes from DB2 10 or DB2 9, and you will have a good experience once your DB2 10 migration is complete.
Follow these three steps to ensure your query performance stays the same or, more likely, improves substantially.
Leverage DB2 Version 9 BASIC and EXTENDED Plan Stability Features
Last week I talked about leveraging the DB2 9 BASIC and EXTENDED plan stability features. I also noted that if you don't have DB2 9 and are going from DB2 Version 8 straight to DB2 10, you can still save your DB2 access paths. If you are migrating from DB2 Version 8, move all your DB2 plans to packages and make copies of the packages with a different OWNER or COLLID, such as "SAVED" or something equally obvious. This way you can copy them back, include the backup collection or point your application at the SAVED packages. Keeping the old SQL optimization and access path information is vital.
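A minimal sketch of making those backup copies (the production collection and package names are hypothetical):

```sql
-- Copy a production package into a backup collection before the
-- migration. COPY preserves the package; a later REBIND in the SAVED
-- collection under DB2 9 freezes a known good access path there.
BIND PACKAGE(SAVED) COPY(PRODCOLL.ORDERPKG) ACTION(REPLACE)
```

Repeat for each package in the production collection, or script the copies from a SYSIBM.SYSPACKAGE query.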
The following diagrams show how to set up a way to save your SQL optimization access paths and then back out a module. This worked for me long ago, back in Version 3 or 4, and for a client on Version 7, so make sure you test your setup and back-out procedures. The process can then be executed quickly by operations personnel with a REXX exec or a library control process such as Endevor or another change management product. Since a picture is worth a thousand words, below are two charts that illustrate how to set up a simple DB2 plan PKLIST access path preservation process.
Gene Fuh and Terry Purcell wrote an article describing this type of DB2 plan technique and the CURRENT PACKAGESET options for protecting your access paths. See "Insurance for Your Access Paths Across Rebinds" for a more detailed explanation.
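The PKLIST preservation process the charts describe can be sketched as follows; the plan and collection names are hypothetical:

```sql
-- Bind the plan so both the live collection and the SAVED backup
-- collection are in the package search list.
BIND PLAN(ORDPLAN) PKLIST(PRODCOLL.*, SAVED.*)

-- To back out a module, direct execution to the preserved packages
-- before the application runs:
SET CURRENT PACKAGESET = 'SAVED';
```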
Remember, DB2 plan stability in DB2 9 and DB2 10 is the best and simplest way to protect your SQL optimizer access paths, and there are also software tools on the market for handling more complex backup and recovery situations. Use DB2 10 plan stability as soon as possible; until then, these other techniques can protect your access paths and guarantee performance. My DB2 10 information has been put into a collaborative book, DB2 10 for z/OS – Cost Savings … Out of the Box, with contributions from Roger Miller, Julian Stuhler and Surekha Parekh.
When getting my taxes ready every year, I review the previous year's activity. Reviewing 2008 showed that the majority of my consulting was spent fixing and tuning DB2 Java-based systems. This is not a big surprise, since the majority of my clients since 2005 have had DB2 Java performance tuning opportunities. Even in 2011, the trend of Java applications being built and implemented with DB2 for z/OS and LUW continues at an ever-growing pace.
In most of these client situations, the problem has not been the database or the DB2 LUW or z/OS system; it has been the DB2 Java application processing. Java is the new workhorse, and many systems are being implemented that do not perform well. Working with my clients, I've discovered a variety of issues with these systems, and over the next several weeks I will highlight the most common factors that kill performance for these new DB2 Java application systems.
Many object framework, architecture and programming pattern options are available for these applications. Given that no single object framework, architecture or programming pattern is right for every situation, there is neither a single right nor a single wrong way to achieve performance success with your DB2 Java application.
(To be continued next week.)
Over the last three years, my clients have shown that multiple framework, architecture and programming patterns are usually implemented within the same project. The problem is that the lessons from one implementation's poor performance are not fully understood, so the performance problems are carried forward and replicated into the next architecture, framework or pattern iteration, including DB2 Java applications.
Each application is different and each service or process within the application is unique. Step back and be flexible in your design patterns to understand that one or two architectures, frameworks or programming patterns are not correct for every situation. Your design should reflect the application requirements and the correct implementation for achieving the best DB2 Java performance might mean a variety or mix of approaches.
Objects within Java are great for flexibility and reuse. Java services and open source products such as Hibernate, iBatis and Ruby, and techniques such as the Java Persistence API (JPA) and Data Access Objects (DAO), are great for accessing the DB2 database and are common in today's DB2 Java applications. My clients have experienced problems with these techniques when the application does not handle transaction integrity or the unit of work properly. When that happens, the DB2 Java processes have usually connected to the database multiple times, processed the transaction too many times, or failed to commit or roll back the transaction properly. These situations usually manifest themselves as JDBC errors or referential integrity problems that developers blame on the database. Unfortunately, it is not the database but the application coding of the services that causes the problems.
In the coming weeks I will talk further about DB2 Java applications, their processing and issues that I have experienced with my clients. I know it will help you avoid some of these problems too.
Given the object model of Java and the relational model of the DB2 database, accessing data properly continues to be difficult for most DB2 Java application developers. Throughout the history of DB2 Java development there have been many attempts, within vendor products, interfaces and open source projects, to bridge this object-to-relational chasm. Many object-to-relational mapping (ORM) solutions exist, but few applications leverage them properly or efficiently.
The leading architectures providing good performance for my DB2 Java clients today, the Enterprise JavaBeans (EJB) specification and the Java Persistence API (JPA), continue to evolve. I have sometimes seen the open source Hibernate product implemented by DB2 Java projects looking for a quick database interface, but it is usually leveraged improperly and performs poorly. Sometimes the Hibernate interface even masks or creates problems in DB2 Java systems through its SQL handling and various parameter settings. Some clients have even written their own plain old Java object (POJO) interface to get to their DB2 data.
Any of these ways to get to the data can work and perform well if the proper application framework, architecture and design pattern is matched with the correct application and transaction type. Working with IMS, IDMS, CICS and MQ systems that reference DB2 shows that very large, highly available databases use a variety of frameworks, architectures and design patterns. Just because an application uses an object oriented language such as Java, C# or a .NET language does not mean that good database standards and principles should not be implemented or expected. Too often the focus is on making the application fit the framework when more attention should be paid to DB2 Java performance.
One of the first standards and principles neglected in the DB2 Java applications I have seen is that the application references the database too many times to complete a single transaction. While it is good to use your ORM database interface, the architect and application programmer should know how many times the ORM layer is used during each different type of transaction. Because these ORM frameworks mask the database as just another object, many programmers do not know when their Java class or web service is firing a SQL call to the database. In some problem systems, I have seen several hundred DB2 Java application calls to the database to complete a single transaction. That level of activity will never provide sub-second transaction performance.
Comparing the database call counts of these new DB2 Java applications against the other applications in the environment usually shows a substantial increase in overall usage. Sometimes the legacy transactions reference the database only 10-25 times, while the new DB2 Java applications reference the database 130-175 times to complete a single transaction. Database usage during the Agile or scrum development process must be highlighted so everyone understands the overall performance requirements and expectations.
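Much of that call inflation comes from row-at-a-time access fired from inside a Java loop. A hedged sketch, with hypothetical table and column names:

```sql
-- Anti-pattern: the ORM issues this once per line item, so an order
-- with 150 items means 150 round trips to DB2.
SELECT ITEM_NO, QTY, PRICE
  FROM ORDER_ITEM
 WHERE ORDER_NO = ? AND ITEM_NO = ?;

-- Set-based alternative: one call returns every row the transaction
-- needs, and the application iterates the result set in memory.
SELECT ITEM_NO, QTY, PRICE
  FROM ORDER_ITEM
 WHERE ORDER_NO = ?;
```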
Most object oriented applications have their database calls travel through the network, web server and application servers, making performance monitoring and evaluation even more difficult. Even though the next buzzword, cloud computing, is supposed to cache away all these transaction performance problems, not even a cloud can make hundreds of calls to the database perform with sub-second response time.
Designing the unit-of-work for a given transaction entails many components. Different techniques and methods are incorporated depending on the components such as Hibernate, iBatis, JPA or Enterprise Bean technology to process the transaction. The Java transaction framework and the object patterns incorporated with the components also affect the transaction unit-of-work. All these factors together provide complete flexibility for today's Java developers.
Unfortunately, with this flexibility comes the responsibility to handle the transaction as effectively and efficiently as possible. The various Hibernate, iBatis, JPA or Enterprise Bean technologies often shield the programmer from the database access. The database object is often passed through several methods or classes before it is used, so a number of modifications could already have taken place. This same database object is also the same SQL table access used by many different processes, which is typical of the majority of the DB2 Java system reviews I have done recently.
This database object is usually a generic SQL call to a single table, retrieving all of its columns with minimal WHERE clause filtering. Alternatively, the call may use a unique key to get a single row from the table. In both cases the SQL is fairly simple. When it has only minor WHERE clause filtering, the database access is too generic; when the access is by unique key, it is usually too fine-grained to retrieve the group of data desired. Sometimes the DB2 Java processing passes multiple instances of the unique-access database object and the Java method processes each of them in turn.
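The two extremes, and a statement fitted to the unit of work, can be sketched as follows (all names are hypothetical):

```sql
-- Too generic: every column, minimal filtering; retrieves far more
-- rows and columns than the transaction needs.
SELECT * FROM CUSTOMER WHERE STATUS = 'A';

-- Too fine-grained: one row per call, repeated many times per
-- transaction.
SELECT * FROM CUSTOMER WHERE CUST_ID = ?;

-- Fitted to the unit of work: only the columns used, and exactly the
-- set of rows this transaction actually processes.
SELECT CUST_ID, CUST_NAME, CREDIT_LIMIT
  FROM CUSTOMER
 WHERE REGION = ? AND STATUS = 'A';
```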
In all these scenarios, the SQL database access within the DB2 Java application does not fit the transaction processing or its unit-of-work. Generic access through the various Java persistence layers usually only provides basic performance for your transaction processing and usually retrieves too many rows for a given transaction. To achieve peak performance the DB2 Java transaction needs to access specific sets of database information and process them quickly. In too many DB2 Java systems this is a rare transaction processing situation.
DB2 Java performance is often a problem because the application is emulating processing that the database could execute more efficiently, or because the processing is poorly designed. Every one of these scenarios that my teams have found during performance or design reviews of DB2 Java systems and applications led to extended I/O activity and excessive CPU usage.
Too often, when the DB2 Java application was designed, the full scope of the eventual processing was unknown. The specification, or even the coding, of the backbone framework began before everything was known, or so many processing add-ons were bolted onto the transaction that the original design no longer fits and no longer performs well. When additional functions are added to the transaction scope, many times the additional data retrievals are not folded into the existing SQL processing; instead they are coded as stand-alone SQL calls. This leads to SQL statement after SQL statement being executed within a single DB2 Java transaction.
These add-on functions typically add more SQL to the transaction unit-of-work, leading to DB2 Java transactions that seem to need hundreds or even thousands of SQL database calls to process from beginning to end. These calls touch and lock a large number of tables, inspect the data and finally perform the transaction processing, typically using excessive CPU and performing a large number of unneeded I/Os.
By not combining or enhancing the existing SQL to retrieve the additional data, the overall number of calls continues to expand and DB2 Java database performance continues to suffer. The application design is needlessly neglected when these new requirements come along. When changing requirements result in additional SQL calls, with the application itself evaluating or combining the new SQL data with an existing object data store, the result is more CPU usage and poor response time.
To avoid these situations in your DB2 Java application, understand all the data your transaction needs. Application processing that combines or re-evaluates data should be pushed back into the existing database SQL statements; DB2 does it much more efficiently. Retrieving extra data is bad for I/O, CPU, locking and overall performance. So the next time your DB2 Java transaction needs additional functionality, don't just add on: integrate the new functions into the existing SQL and design of your DB2 Java application's database processes.
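Integrating an add-on into the existing SQL usually means extending the statement with a join rather than issuing a second call. A hedged sketch with hypothetical names:

```sql
-- Add-on approach: a new requirement bolted on as a second,
-- stand-alone call, with the application joining the results itself.
SELECT ORDER_NO, ORDER_TOTAL FROM ORDERS WHERE CUST_ID = ?;
SELECT CUST_NAME, CREDIT_LIMIT FROM CUSTOMER WHERE CUST_ID = ?;

-- Integrated approach: extend the existing statement with a join so
-- DB2 combines the data in a single call.
SELECT O.ORDER_NO, O.ORDER_TOTAL, C.CUST_NAME, C.CREDIT_LIMIT
  FROM ORDERS O
  JOIN CUSTOMER C ON C.CUST_ID = O.CUST_ID
 WHERE O.CUST_ID = ?;
```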
In previous blog entries I have talked about transaction scope, how DB2 Java applications access the database too many times, and how transaction units of work (UOWs) are not properly analyzed.
Too often these days the design and development of DB2 Java applications is done with an Agile or scrum methodology, where short, concise project deliverables are designed to produce working transactions. These methodologies are good for delivering transactions but are sometimes not good for overall DB2 Java performance. Because the scope of an Agile or scrum session is an individual transaction, the big picture of the overall business and processing objectives sometimes gets lost. This leads to transactions that each accomplish only a small, discrete piece of the business, while other transactions retrieve the same master customer or product information again and again to complete the processing activity.
Database caching can shield performance from the impact of repeatedly fetching the same database information, but it cannot cache all the activity. When analyzing your various transactions, determine the overall business objectives and flow of your DB2 Java application, and combine standalone transactions or SOA services that use the same data keys as much as possible.