Leverage DB2 Version 9 BASIC and EXTENDED Plan Stability Features
Last week I talked about leveraging the DB2 9 BASIC and EXTENDED plan stability features. I also commented that if you don’t have DB2 9 and are going from DB2 Version 8 to DB2 10, you can still save your DB2 access paths. If you are migrating from DB2 Version 8, move all your DB2 plans to packages and make copies of the packages with different OWNERs or COLLIDs, such as “SAVED” or something equally obvious. This way you can copy back, include the backup collection in your PKLIST, or manipulate your application to use these SAVED packages. Keeping the old SQL optimization and access path information is vital.
The following diagrams show how to set up a way to save your SQL optimization access path information and then back out a module. This worked for me a long time ago with Version 3 or 4, and for a client with Version 7, so make sure you test your setup and back-out procedures. Then the process can be quickly executed by operations personnel with a REXX exec or a library control process such as Endevor or another change management product. Since a picture is worth a thousand words, below are two charts that illustrate how to set up a simple DB2 plan PKLIST access path preservation process.
Gene Fuh and Terry Purcell wrote an article describing this type of DB2 plan technique and the CURRENT PACKAGESET options for protecting your access paths. See “Insurance for Your Access Paths Across Rebinds” for a further detailed explanation.
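The charts' approach can be sketched with DB2 commands like the following; the collection, package, and plan names here are illustrative examples, not the names from the charts:

```sql
-- Save the current package (and its access paths) into a backup
-- collection before migrating or rebinding (names are examples)
BIND PACKAGE(SAVED_COLL) COPY(PROD_COLL.PAYROLL) ACTION(REPLACE)

-- The plan's PKLIST searches production first, the saved copies second
BIND PLAN(PAYPLAN) PKLIST(PROD_COLL.*, SAVED_COLL.*)

-- To back out, point the application at the saved packages
SET CURRENT PACKAGESET = 'SAVED_COLL';
```

Test the BIND COPY and the SET CURRENT PACKAGESET switch in a sandbox subsystem first, since PKLIST search order and package authorizations vary by shop.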
Remember, DB2 plan stability with DB2 9 and DB2 10 is the best and simplest way to protect your SQL optimizer access paths. There are also software tools on the market for handling more complex backup and recovery situations. Use DB2 10 plan stability as soon as possible, but there are several other techniques to protect your access paths and guarantee performance. My DB2 10 information has been put into a collaborative book, DB2 10 for z/OS: Cost Savings … Right Out of the Box, with information from Roger Miller, Julian Stuhler, and Surekha Parekh.
When getting my taxes ready every year, I review the previous year’s activity. Reviewing 2008 showed that the majority of my consulting was spent fixing and tuning DB2 Java-based systems. This is not a big surprise, since the majority of my clients since 2005 have had DB2 Java performance tuning opportunities. Even in 2011, the trend of Java applications being built and implemented with DB2 z/OS and LUW continues at an ever-growing pace.
In most of these client situations, the majority of the problem has not been the database or the DB2 LUW or z/OS system; it has been related to the DB2 Java application processing. Java is the new workhorse and many systems are being implemented and not performing well. Working with my clients, I’ve discovered a variety of issues with these systems and over the next several weeks I will highlight the most common factors that kill performance for these new DB2 Java application systems.
Many object framework, architecture and programming pattern options are implemented with these languages. Given that there is no one object framework, architecture or application programming pattern that is right for every situation, there is neither a single right nor wrong way for performance success with your DB2 Java application.
(To be continued next week.)
To figure out the best temporal table design, you need to think through the various options and considerations that will affect its performance. The most important aspect of your temporal table is the answers that your applications or users expect from it. The best way is to figure out the time aspect that the application is trying to capture. Are your applications looking for the financial value, the insurance coverage level, enrollment status, customer value, or something else?
The temporal table status can be contingent on two types of settings: business time or the system processing time. If the processing is delayed and the system time is later than expected, does that affect your temporal table status? Or are you using the temporal table in a real time scenario where either the business or system time will affect the meaning of the data? There are many ways to respond to the situations and questions, but the design decision should be based on the application and user questions that need to be answered. So it is best to test both SYSTEM_TIME and BUSINESS_TIME scenarios out and see which design provides the best answers with the best performance.
The next design point is to figure out your timestamp type. Do your temporal table applications require distinct, non-overlapping time periods throughout the system? Your DB2 10 system now has the capability to guarantee that the time periods for a given key are unique across the table. This DB2 syntax is defined with WITHOUT OVERLAPS and can be used only with your BUSINESS_TIME values. After the temporal table is created, an index is defined on it using your unique columns and the BUSINESS_TIME WITHOUT OVERLAPS keywords. BUSINESS_TIME is the only period the WITHOUT OVERLAPS keyword works with.
When BUSINESS_TIME WITHOUT OVERLAPS is specified, the columns of the BUSINESS_TIME period must not be specified as part of the constraint. The specification of BUSINESS_TIME WITHOUT OVERLAPS adds the following to the constraint:
- The end column of the BUSINESS_TIME period in ascending order
- The start column of the BUSINESS_TIME period in ascending order
The minimum value of a TIMESTAMP(12) is 0001-01-01-00:00:00.000000000000 and the maximum value is 9999-12-31-24:00:00.000000000000. For DATE, the minimum is 0001-01-01 and the maximum value is 9999-12-31.
A system-generated check constraint named DB2_GENERATED_CHECK_CONSTRAINT_FOR_BUSINESS_TIME is also generated during this definition process to ensure that the value of the end column is greater than the value of the start column. BUSINESS_TIME WITHOUT OVERLAPS must not be specified for a partitioned index.
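A minimal DDL sketch of the BUSINESS_TIME WITHOUT OVERLAPS setup described above; the table and column names are hypothetical:

```sql
-- Hypothetical coverage table with a BUSINESS_TIME period
CREATE TABLE POLICY
  (POLICY_ID    INTEGER       NOT NULL,
   COVERAGE_AMT DECIMAL(11,2) NOT NULL,
   BUS_START    DATE          NOT NULL,
   BUS_END      DATE          NOT NULL,
   PERIOD BUSINESS_TIME (BUS_START, BUS_END));

-- The unique index guarantees that no two rows for the same
-- POLICY_ID have overlapping business-time periods
CREATE UNIQUE INDEX POLICY_IX
  ON POLICY (POLICY_ID, BUSINESS_TIME WITHOUT OVERLAPS);
```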
There are a number of considerations when creating your DB2 10 temporal table. When your application needs system-wide uniqueness for its time periods, the BUSINESS_TIME WITHOUT OVERLAPS option provides that capability, with some cautions. Check out other posts on temporal tables via developerWorks or at my site, www.davebeulke.com.
DB2 offers application designers new functionality for their data warehousing requirements. The new DB2 10 Temporal Tables provide a way to have a snapshot in time of the status of customers, orders or any other type of business situation.
DB2 Temporal Tables, with their built-in functionality, automatically understand the business time or system time of the data entered into the system. This functionality is ideal for handling and documenting the condition of any business aspect at a certain time. It is driven by two new period definitions, BUSINESS_TIME and SYSTEM_TIME, defined within a table definition. Using these new time periods within a DB2 Temporal Table definition provides a system-maintained period, a business-maintained period, or a bi-temporal combination of both for your data.
Many systems today have manual processes or utilities that manage or migrate their real-time data to history tables. The new DB2 Temporal Tables, with their system time and business time periods, can automatically migrate transactional temporal table data to a user-defined HISTORY table. Having these facilities built into the database greatly improves regulatory compliance, operations, and overall DB2 performance tuning.
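A minimal sketch of that system-time setup, with hypothetical table names; once the ADD VERSIONING is in place, DB2 itself moves the before-images of updated and deleted rows into the history table:

```sql
-- Base table with a SYSTEM_TIME period (names are examples)
CREATE TABLE CUSTOMER
  (CUST_ID   INTEGER       NOT NULL,
   STATUS    CHAR(1)       NOT NULL,
   SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   TRANS_ID  TIMESTAMP(12)          GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD SYSTEM_TIME (SYS_START, SYS_END));

-- History table with the identical structure
CREATE TABLE CUSTOMER_HIST LIKE CUSTOMER;

-- Connect the two; DB2 now maintains the history automatically
ALTER TABLE CUSTOMER
  ADD VERSIONING USE HISTORY TABLE CUSTOMER_HIST;
```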
Separating the real-time transaction data from the old data within your database using the HISTORY table requires planning and design steps. The separation of old data from new data guarantees that application and SQL performance does not suffer when your database is fully populated. Separation of the old and new data also helps DB2 performance tuning management, so more resources can be delegated to maintaining the new transaction data where DB2 performance tuning matters for business operational success.
Over the coming weeks I will go through the steps and design decisions required to set up a Temporal Table. We will go through the SYSTEM_TIME, BUSINESS_TIME and a bi-temporal table design.
Well, the votes are in: 22 billion rows is big enough data. It’s not the billions of web log rows of a Google or Facebook, but it’s big enough for everyone. One of the comments that struck me was that one in a million happens 22,000 times. So whatever your criterion is for big data, it is more a state of mind about the amount of data than the actual terabyte amounts or the number of rows. Regardless of which database systems you work with, big is a relative term. Just ask your SQL Server, Oracle, and Sybase DBA friends what they consider a big system. Usually the answer is nowhere near what you get for DB2 z/OS or even DB2 LUW systems. I talked about this last year in my blog (“Performance is Relative”).
Other comments and questions received about last week’s blog asked for more clarification on the idea of keeping a database design simple. So below are three different ways to keep your big data data warehouse design simple.
First: There are reasons Bill Inmon’s and Ralph Kimball’s decentralized and centralized data warehouse ideas are so popular: those design patterns work. Design patterns for all types of IT applications, Java/.NET MVC (model-view-controller), various business models, and standard processes have been extensively analyzed and endorsed over many years through design pattern books, conferences, and government studies. The decentralized and centralized data warehouse design patterns work, and your design should use them for your data warehouse performance. Big data or not, there is no reason to do something more complex. Starting with these design patterns, using and optimizing a simple pattern of Fact table(s) surrounded by Dimension table(s), will provide you data warehouse performance. Decentralizing or extending these design patterns with additional Fact tables and slowly changing Dimension tables will minimize the amount of big data referenced in a typical transaction, and your data warehouse performance won’t be an issue.
Second: Make sure to normalize your big data design. It’s typical to try to consolidate everything within a data warehouse design. Unfortunately, having too many elements in a table forces too much data into an application transaction, and data warehouse performance can suffer. Just as the decentralized and centralized data warehouse design patterns have been used for years, database table normalization has been around even longer because it logically optimizes your design. The database normalization process has been documented everywhere over the years, and it is effective for making sure the table elements relate to the whole database table key(s). Combining table keys or designs causes excessive repeating data or data groups, while over-normalization leads to excessive application joins. Normalization is striking a balance, and no one does it perfectly the first time. Normalize your data warehouse design several times and your design can strike a performance balance for all critical applications.
Third: Design, test, redesign, test, and repeat. Schedule enough database and application meetings and testing time to understand all the application transactions and reports. Data warehouse performance and modeling big data can get unwieldy, so testing your design early is vital. Sometimes big data table population numbers cause modeling tools to abort. Cut the number of zeros down, model all your application transactions against your database design, build it, and run applications against it. Data warehouse performance requires real-life testing and actual running of the code or SQL that interfaces with the design. No one has time to do it perfectly, but everyone will be mad if you have to redesign and do it all over right before implementation. Know your performance before production through thorough testing. Big data and data warehouse performance require design and testing. Make sure to do these several times during your development with as much big data as possible.
These are only some of the simple truths that ensure your data warehouse performance and make your big data system a success.
Within the zEnterprise announcement in July 2010 there were several exciting comments about the new query performance within the IBM Smart Analytics Optimizer. This smart system leverages the integrated platform, providing industry-leading scalability for data warehouse and business intelligence workloads. Queries that scan entire terabytes of data, queries that were previously avoided, can now be executed in seconds through the IBM Smart Analytics Optimizer processing. This capability provides data mining features for quickly getting answers to the clustering, associations, classifications, and predictions within your data warehousing environment.
Within some of the documentation, IBM testing of the Smart Analytics Optimizer shows huge performance boosts. Queries that previously scanned large volumes of data, once optimized, sometimes showed an improvement of 54 times in elapsed time and 711 times in the cost of the query. This type of improvement is very exciting and is especially what new data warehousing and business intelligence workloads need on the platform. Check out all the zEnterprise documentation, especially the information on the IBM Smart Analytics Optimizer. This was just the beginning of a resurgence of the new mainframe that supports any mainframe, UNIX, or Windows workload. Known as a company’s “private cloud,” it allows IT departments to leverage tremendous performance improvements while reducing query costs substantially.
During the 2010 IBM Z Summit road show, there were several presentations detailing the mainframe platform's advantages over UNIX and Windows platforms, such as the lowest total cost of ownership, the best availability, and unparalleled scalability. These presentations cut through the rumors with detailed facts and figures on different platform configurations. Download these presentations and distribute them to your management as a reminder of why the mainframe continues to be the best platform for your enterprise applications.
The Windows and UNIX platform proponents always discount and minimize the total cost of ownership, availability, and scalability topics. It is our duty to periodically remind management of the extra costs of these UNIX and Windows systems, with their huge power consumption, software license fees, and the maintenance costs of working with several hundred or thousands of disparate systems. The mainframe quietly continues to process the majority of the transactions at the Fortune 500 companies, and everyone, especially younger management types who think the world can run on an iPhone, needs to understand that the System Z infrastructure is the best backbone for any company.
The System Z mainframe is also evolving, since it now has specialized processors such as the IFL, zIIP, and zAAP to reduce overall operational and licensing costs. These specialty processors, along with the smaller configurations of the System Z, offer a single small platform that can consolidate any number of UNIX workloads into one footprint with a smaller, greener energy profile and a better licensing configuration.
The presentations detail benchmarks, licensing fees, and labor costs of various mainframe versus UNIX platform configurations. The figures show it sometimes takes double the number of processor cores on a UNIX configuration just to begin to scale out. Even more UNIX processors are required to achieve transaction rates that still reach only one-fourth of what the mainframe System Z executes. These UNIX systems are also dedicated to the production transaction environment, with no thought of supporting the testing, QA, or failover facilities that have yet to be priced or considered, features that come standard within the System Z environment.
System Z also continues to grow because of its faster chips. Ask any PC or UNIX platform person “what platform has the fastest clock speed processors” and you will quickly find out who keeps up with industry information. The chip clock speeds of the System Z and other IBM platforms have improved like the rest of the PC industry. In fact, the System Z z10 chip operates at 4.4 GHz and comes in a 64-way quad-core configuration that can speed through any application performance problem. This is almost twice as fast as the HP Superdome processors and a third faster than the Intel Nehalem chips.
So the mainframe continues to lead the industry. Does your management know the cost savings and performance figures of System Z? Tell them and show them the presentations before someone tries to “replace the mainframe” again with a more troublesome, power-hungry, poorly performing clustered iPhone configuration.
Over the last three years, my clients have shown that multiple framework, architecture, and programming patterns are usually implemented within the same project. The problem is that the poor-performance lessons from one application implementation are not fully understood, so the performance problems are carried forward and replicated into the next architecture, framework, or pattern iteration, including DB2 Java applications.
Each application is different and each service or process within the application is unique. Step back and be flexible in your design patterns to understand that one or two architectures, frameworks or programming patterns are not correct for every situation. Your design should reflect the application requirements and the correct implementation for achieving the best DB2 Java performance might mean a variety or mix of approaches.
Objects within Java are great for flexibility and reuse. Java services, open source products such as Hibernate, iBatis, and Ruby, and techniques such as the Java Persistence Architecture (JPA) and Data Access Objects (DAO) are great for accessing the DB2 database. Many of these techniques are common in today’s DB2 Java applications. My clients have experienced problems with these techniques when the application processing does not handle transaction integrity or the unit of work properly. When this happens, the DB2 Java application processes usually have connected to the database multiple times, processed the transaction too many times, or not committed or rolled back the transaction properly. These DB2 Java transaction situations usually manifest themselves in JDBC errors or referential integrity issues that developers blame on the database. Unfortunately, it is not the database but the application coding of the services that causes the problems.
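Whatever framework sits on top, the unit of work ultimately comes down to the SQL boundaries the application drives. A minimal sketch, with hypothetical table names, of keeping related statements inside one unit of work:

```sql
-- One unit of work: both changes commit together or not at all
UPDATE ACCOUNT
   SET BALANCE = BALANCE - 100
 WHERE ACCT_ID = 1;

INSERT INTO ACCT_LOG (ACCT_ID, AMOUNT)
VALUES (1, -100);

COMMIT;

-- If either statement fails, the application should issue a
-- ROLLBACK so the transaction is not left half applied
```

The framework equivalent is disabling auto-commit on the connection and committing or rolling back once per business transaction, not once per statement.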
In the coming weeks I will talk further about DB2 Java applications, their processing and issues that I have experienced with my clients. I know it will help you avoid some of these problems too.
In July 2010 IBM announced the zEnterprise with its new integrated IBM POWER7 and IBM System x blade environments. This provides users with the ability to run traditional mainframe applications alongside any UNIX and Intel based applications. These new capabilities provide a one-stop cloud environment for businesses to deploy and scale any application from any platform.
In August 2010 IBM announced the new deployment options for the DB2
pureScale system to the IBM System x environments. The ability to leverage the
DB2 pureScale environment provides the same multi-member DB2 data sharing
mainframe capabilities within a new SUSE Linux based solution. Testing shows
that the DB2 pureScale environment performs and scales almost linearly, like
the z/OS DB2 Data Sharing mainframe environment.
Providing these DB2 Data Sharing type capabilities within the System x architecture provides another great performance and scalability option within the DB2 LUW family. The DB2 pureScale option with its tremendous scalability provides a great expansion option for existing DB2 LUW systems that are running out of capacity within their single computing environment footprint. And now, with the ability to deploy them to the zEnterprise, companies that have DB2 LUW systems can get the reliability and performance management of the mainframe. DB2 pureScale on System x also provides another open DB2 LUW solution for companies that need to consolidate UNIX or Intel based environments and applications. So regardless of the application or system requirements, the DB2 family provides the most open, cost-efficient options. Check out the full DB2 pureScale announcement at IBM.
I received some great comments and questions based on the blog entry/video about the many new features in DB2 “Cobra.” One of the more interesting questions asked was, “How much work is it, and how compatible is the new DB2 with Oracle?” The answer is that DB2 is very compatible; so compatible, in fact, that many people have migrated from Oracle to DB2 in a very short time.
So what are the reasons or justifications for management to consider changing their database from Oracle to DB2? The top four reasons I have heard recently are:
1. Oracle contract pricing continues to go up. Every time we look, Oracle wants more money because we had to add CPU cores to get application performance.
2. The performance of our BI OLAP Oracle workload does not respond. We always have to rewrite the SQL or create new indexes to make it work. When we tried it on DB2, the optimizer automatically rewrote the SQL, avoiding the manual SQL rewrite and improving application performance and overall response time.
3. The DB2 disaster recovery and HADR feature is much better than any Oracle stand-by or fail-over solution. Now that the application is compatible with DB2, there is no reason to risk the business because of an Oracle failure.
4. Saving IT budget money is paramount these days, and with our Oracle database we continue to spend money on expanding disk requirements. DB2 data compression and the new index compression provide a way to reduce disk costs, improve database performance, and minimize backup and recovery time while reducing disk requirements and saving money.
Converting your database from Oracle to DB2 is very easy. You can save money for your company, your IT department, and maybe your job by migrating to DB2.
Dave Beulke is an internationally recognized DB2 consultant, DB2 trainer, and education instructor. Dave helps his clients improve their strategic direction, dramatically improve DB2 performance, and reduce their CPU demand, saving millions in their systems, databases, and application areas within their mainframe, UNIX, and Windows environments. Search for more of Dave Beulke’s DB2 performance blogs on developerWorks and at www.davebeulke.com.
Previously, I talked about the first alphabetic group of DB2 data warehouse DSNZPARMS that can improve your access paths and overall application performance. This week the second set of DSNZPARMS is discussed. Many of the data warehouse DSNZPARMS discussed are somewhat hidden within the regular DSNZPARM install panels. All of the DSNZPARMS discussed are available in DB2 Version 9 for z/OS, and some are available in DB2 Version 8.
Caution needs to be taken with all system settings, and especially these data warehouse DSNZPARMS. These DSNZPARMS are meant to change and improve access paths, but each data warehouse design is unique, along with each application access path, so results will vary. If the data warehouse DB2 subsystem is shared with other OLTP or operational applications, I highly recommend fully documenting and setting up a full PLAN STABILITY plan and package management structure for your current access paths before changing any DSNZPARMS. This documentation, along with a good PLAN STABILITY plan and package management implementation and back-out practice, helps you quickly react to your environment and back out any detrimental access paths encountered through an unexpected rebind of any program.
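For DB2 9 and later, the package-level protection mentioned above can be sketched like this (the collection and package names are examples):

```sql
-- Keep the current access paths as a saved copy at rebind time
REBIND PACKAGE(DWCOLL.DWPGM1) PLANMGMT(EXTENDED)

-- If a DSNZPARM change or rebind degrades the access paths,
-- fall back to the previously saved copy
REBIND PACKAGE(DWCOLL.DWPGM1) SWITCH(PREVIOUS)
```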
Some of the comments from previous blogs on data warehouse applications highlighted the resurgence of data warehousing on the z/OS platform and why running a data warehouse on z/OS provides many advantages over other platforms. One advantage that several people noted is that when your data warehouse runs on z/OS, the huge ETL processes don’t usually have to transmit the data over a network. Even though network bandwidth is robust, avoiding this extra bottleneck can sometimes save hours of overhead, guaranteeing that your refresh jobs have enough time every day to provide critical refreshes of the data within your data warehouse. Additionally, most of your source data warehouse data comes from the z/OS operational systems and can quickly be put into operational business intelligence data warehouses. This fresh data increases sales, provides real-time inventory or product availability updates and, most importantly, removes latency for the critical single master data source of record for the enterprise.
Improve your system and application performance by adjusting these data warehouse DSNZPARMS to improve your access paths, leveraging the superior DB2 optimizer technology for the most efficient performance available.
To get Part 2 of the DB2 V9 DSNZPARM settings, click here.
The DB2 10.5 “Cancun Release” was previously made available as DB2 LUW Version 10.5 Fix Pack 4.
The short list of prerequisites and summary DB2 10.5 “Cancun Release” features list is here at the IBM DB2 10.5 “Cancun Release” Fix Pak Knowledge Center website.
Within the release there are many important features, and some of the most interesting ones are related to the new DB2 Explain information. New Explain columns provide more information on the DB2 SQL access path attributes used within our applications.
There are 14 new columns whose values support a variety of DB2 optimizer access path information. These are highlighted below, along with a description of what the new Explain information provides for our debugging and performance research.
Provides an indicator related to whether the SQL Explain output was built from SQL that was reused. This can be important to identify SQL that could be reused from your system statement cache, be resident for a long time, and not the new refreshed version that was just pushed into the test environment.
Indicates whether the sort process is used to buffer the result set information as it is retrieved from the database.
• BYDPART and RANDACC
These columns help identify and describe the Zig Zag Join access path. They indicate whether the DB2 access path is partition dependent, which is especially important for pureScale systems. The RANDACC Explain column highlights whether the Zig Zag Join uses a TEMP table to facilitate and improve random access.
Indicates that XML data may be moved between DPF partitions during the execution of the SQL.
Indicates that the access path references the index block entries within a Multi-Dimensional Clustered (MDC) index to FETCH the result set data. This is similar to index-only access on a standard table.
Indicates that DB2 SQL optimization error(s) occurred during the parsing or applying of SQL Profiles, a new feature. By defining specialized SQL optimization profiles, access paths can be improved or standardized for access-path-challenged tables. Sometimes applying this profile information causes the parsing or optimization to encounter an error. This column, along with the diagnostic messages, can help debug the error situation.
Tells whether the bind option for skipping locked data during the SQL execution is enabled. Skipping locked data can be troublesome especially in robust systems with detailed result set requirements. Be careful and monitor this for accounting and other critical result set data applications.
• PLANID, STMTID and EXECUTID
Help uniquely identify the SQL statements for the DB2 plan, statement, and the execution id used for the Explain. These unique identifiers are very important due to the new ability to set up execution profiles which the optimizer can use to favor critical DB2 access paths for better performance.
• BUSTSENS and SYSTSENS
These columns indicate whether the SQL could be impacted by the value of the CURRENT TEMPORAL BUSINESS_TIME or CURRENT TEMPORAL SYSTEM_TIME special register. This is very important since SQL can have specific DB2 temporal table business time or system time requirements. The temporal special registers and the process’s SQL temporal requirements now provide more information about the flexibility of using business or system temporal time.
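As a sketch of how that special register interacts with a system-period temporal table (the table name is hypothetical):

```sql
-- Ask for the data as of one year ago without changing the SQL text
SET CURRENT TEMPORAL SYSTEM_TIME = CURRENT TIMESTAMP - 1 YEAR;

-- DB2 implicitly adds the period predicates and, if needed,
-- reaches into the history table
SELECT CUST_ID, STATUS
  FROM CUSTOMER
 WHERE CUST_ID = 1;

-- Reset to normal current-data behavior
SET CURRENT TEMPORAL SYSTEM_TIME = NULL;
```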
Indicates the DB2 access path isolation level used for the SQL against a specific table. Since the isolation level can be adjusted through a number of mechanisms, this Explain column provides the information indicating whether it is Uncommitted Read, Read Stability, Cursor Stability, or Repeatable Read.
All of these new Explain columns in the new DB2 10.5 “Cancun Release” provide more critical information about the development of DB2 SQL access paths by the DB2 optimizer. Knowing how the DB2 optimizer is treating your SQL is always better for getting the best DB2 SQL performance possible.
Dave Beulke is a system strategist, application architect, and performance expert specializing in Big Data, data warehouses, and high performance internet business solutions. He is an IBM Gold Consultant, an Information Champion, President of DAMA-NCR, a former President of the International DB2 Users Group, and a frequent speaker at national and international conferences. His architectures, designs, and performance tuning techniques help organizations better leverage their information assets, saving millions in processing costs.
Make sure that the interface with CICS works well to improve performance:
- Validate the settings between your DB2 CICS transaction configurations. Make sure the maximum workload from one TOR, AOR or DOR is not overwhelming another CICS processing partner.
- If your system is heavily dependent on CICS transactions:
- Make sure that you are reviewing the CICS Statistics DB2 Connections reports on a regular basis.
- Update the RDO/RCT settings so that the DB2 workload is using mainly dedicated and reused threads.
- Ensure that your RDO/RCT settings use a minimum of pool threads.
- Make sure that your workload never has to wait for a thread to execute work in your DB2 CICS environment.
- Handle DB2 security correctly to improve DB2 performance:
- Make sure that you aren't duplicating security checking with both DB2 and another security product.
- Use group ids for your DB2 authorizations as much as possible so that plan, package and CICS thread authorizations can be reused.
- Review DB2 bufferpool activity to improve DB2 performance:
- A DB2 bufferpool should be sized correctly for its activity. Make sure to put data objects and index objects in different bufferpools. Data objects with similar access characteristics should be defined to the same bufferpools.
- Make sure that the EDM pools maintain enough free space.
- Make sure that the RID pool is adequately sized and that RID pool overflows are not happening in your environment.
- Is your Dynamic Statement Cache Pool set to the appropriate size for your system? Are your settings encouraging SQL caching?
- Check the health of your database objects to improve DB2 performance:
- Are there DB2 tablespaces and tables that need to be reorganized? Check for a large number of data set extents and a large value in FAROFFPOS.
- In order to use indexes optimally,
- Check to ensure that indexes leverage uniqueness
- Make sure that clustering indexes make sense and that the table is clustered in the most popular application processing sequence order and maintains a CLUSTERRATIO of 95% or higher
- Make sure that index columns are in optimum order for your most frequently executed application WHERE clauses.
- Make sure that all defined indexes are being used
Do you have a reorganization schedule and appropriate freespace settings that keep DB2 tables cleaned up with a minimum of outage?
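Most of the object-health checks above can be driven from the DB2 catalog. A minimal sketch; the thresholds are illustrative assumptions, not IBM recommendations, and should be tuned to your own environment:

```sql
-- Clustering indexes whose CLUSTERRATIO has fallen below 95%
SELECT NAME, TBNAME, CLUSTERRATIOF * 100 AS CLUSTER_PCT
  FROM SYSIBM.SYSINDEXES
 WHERE CLUSTERING = 'Y'
   AND CLUSTERRATIOF < 0.95;

-- Index partitions with high far-off-position counts (reorg candidates)
SELECT IXNAME, PARTITION, FAROFFPOSF, LEAFDIST
  FROM SYSIBM.SYSINDEXPART
 WHERE FAROFFPOSF > 10000;

-- Table spaces with many data set extents (DB2 9+ real-time statistics)
SELECT DBNAME, NAME, PARTITION, EXTENTS
  FROM SYSIBM.SYSTABLESPACESTATS
 WHERE EXTENTS > 50;
```

Running these after each RUNSTATS cycle turns the review into a repeatable report rather than a one-time audit.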
If you do the intensive review suggested here, you'll find that you will become the "go to" person for DB2 performance. In these economic times, saving the corporation money will make a positive impression within the company.
Dave Beulke is an internationally recognized DB2 consultant, DB2 training and DB2 education instructor. Dave helps his clients improve their strategic direction, dramatically improve DB2 performance and reduce their CPU demand, saving millions in their systems, databases and application areas within their mainframe, UNIX and Windows environments. Search for more of Dave Beulke’s DB2 Performance blogs on DeveloperWorks and at www.davebeulke.com.
The North American IDUG Conference and the IOD EMEA conference highlighted all the functionality and features within DB2 10. The more you hear about these functions and features, the more you realize they are really going to save your company money right away.
Justifying a DB2 upgrade is always necessary for management and for the new DB2 10 it is going to be a lot easier with its many CPU-saving features. With all the new features saving from 5-7% right out of the box in conversion mode, DB2 10 offers existing systems and applications immediate savings. Add in all the new options such as Hash Access, Index Include Columns and other features and developer and DBA activities become a lot easier, reducing your overall DB2 CPU capacity requirements possibly up to 20%. Think of getting a 20% decrease in your costs and the costs of the associated third party software and you can see why management is going to push for getting DB2 10 installed as soon as possible.
One of the first ways DB2 10 reduces your CPU within your systems and applications is through a new access path called Range-List Index Scan. This new access path should immediately assist many of your applications that have multiple WHERE predicates that reference the same index. In DB2 9 the access paths scanned the index for each of the WHERE predicate references. Within DB2 10 the Range-List Index Scan scans the index only once to retrieve all the RIDs, saving the extra CPU and I/O of multiple separate scans, RID consolidation and duplicate RID elimination. This can be a huge improvement for applications that have paging or other types of searching SQL WHERE predicates.
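A hypothetical cursor-paging query of the kind that benefits: both OR legs probe the same index over (LASTNAME, FIRSTNAME), so under DB2 10 they can be satisfied in a single range-list index scan rather than two separate scans plus RID merging. Table and column names are illustrative:

```sql
-- Fetch the next page of customers after the last row shown,
-- assuming an index on (LASTNAME, FIRSTNAME)
SELECT LASTNAME, FIRSTNAME, ACCOUNT_NO
  FROM CUSTOMER
 WHERE (LASTNAME = 'JONES' AND FIRSTNAME > 'MARY')
    OR (LASTNAME > 'JONES')
 ORDER BY LASTNAME, FIRSTNAME
 FETCH FIRST 20 ROWS ONLY;
```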
When your applications are being prepared for migration to DB2 10, the plan stability features started in Version 9 and improved in DB2 10 will assist you. Within DB2 10 this enhancement lets the application manager lock down any package to a particular access path. If the access path would change through an attempt to rebind the package, the process can raise an error or provide a warning within the Bind process. This protects you against access path regressions and alerts your developers to a possible problem.
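A sketch of the bind commands involved, assuming the DB2 10 APREUSE/APCOMPARE options behave as announced (they were still being refined during the beta); collection and package names are illustrative:

```sql
-- DB2 9 plan stability: keep prior package copies at rebind
REBIND PACKAGE(MYCOLL.MYPKG) PLANMGMT(EXTENDED)

-- DB2 10: try to reuse existing access paths and fail the rebind if
-- reuse is impossible; warn (or fail with ERROR) if any path changes
REBIND PACKAGE(MYCOLL.MYPKG) APREUSE(ERROR) APCOMPARE(WARN)

-- Fall back to the previous saved copy if a regression slips through
REBIND PACKAGE(MYCOLL.MYPKG) SWITCH(PREVIOUS)
```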
So if you want a 20% decrease in your CPU demand, or just want to stay up with your competition, start your plans for DB2 10 now.
DB2 upgrades are not the same in any two companies and may not even be the same within the same company. As multiple DB2 systems sometimes support different applications with varying requirements, the Return on Investment (ROI) is completely different. Also, comparing DB2 z/OS or DB2 LUW to Oracle system upgrades also does not even begin to be similar or comparable as all these databases have completely different contract clauses, ROI requirements and upgrade factors. Apples and oranges are both edible, but that is where the comparison and similarities stop.
DB2 upgrades depend on the technology. First-hand experience has shown that bad research, poor database design, poor application development techniques, bad choices in third party vendor tools and an assortment of other factors make ROI equations for upgrades complex or almost impossible. Having helped fix bad technology decisions, redesigned bad database designs and fixed SQL application performance problems, I know that reducing costs is always on everyone’s mind. Upgrading any software especially raises cost consciousness.
To minimize upgrade costs, strive for better understanding and knowledge of the DB2 technology. This will help everyone avoid unnecessary costs and bad decisions. Work on minimizing the DB2 10 upgrade costs through better management of your DB2 systems today. I’ve mentioned the considerations of SMS environments and improving your security definitions to get ready to separate security data access.
This week the suggestion for preparing for DB2 10 is to begin analyzing your DB2 application techniques. The first thing to analyze is how your plans and packages are put together. IBM’s DB2 engineers have been telling everyone to get to DB2 packages within their environment for many releases. In DB2 10 the old plan management technique of DBRMs going directly into plans needs to be replaced by collecting them within COLLIDs and then binding into your DB2 plans. It is a small difference but it has big implications. If your shop still uses the technique of putting DBRMs directly into plans, it is time to start a process to migrate to the new method.
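A minimal sketch of the conversion, with hypothetical plan, DBRM library and collection names. DB2 9 and 10 also offer an automatic route via REBIND PLAN with the COLLID option, which binds each DBRM already in the plan into a package under the named collection:

```sql
-- Manual route: bind each DBRM as a package, then point the plan
-- at the collection through its PKLIST
BIND PACKAGE(PAYROLL_COLL) MEMBER(PAYDBRM1) LIBRARY('PROD.DBRMLIB') VALIDATE(BIND)
BIND PLAN(PAYPLAN) PKLIST(PAYROLL_COLL.*) ACTION(REPLACE)

-- Automatic route: let DB2 convert the DBRMs bound directly into the plan
REBIND PLAN(PAYPLAN) COLLID(PAYROLL_COLL)
```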
Next, the compliers that your applications utilize also need to be current. DB2 10 changes within the COBOL pre-compiler generate new SQLCA and SQLDA structures along with generating COMP-5 binary data attributes instead of COMP-4 or COMP. Even though there are still some ways to handle old compilers it is best to review your COBOL compiler versions to make sure they will be ready for your eventual DB2 migration.
Some time ago, IBM System Z announced the introduction of additional new specialty processors. The first System Z specialty engine, introduced for data sharing, was the internal coupling facility (ICF); next came the integrated facility for Linux (IFL), then the application assist processor (zAAP), and the latest is the integrated information processor (zIIP). All of these specialty engines take the burden off the general processors, reducing the software licensing costs within System Z environments.
The savings from the zIIP and zAAP specialty engines are especially dramatic when processing different types of DB2 workloads. Within the System Z environment, the zAAP engines are available for Java workloads and XML System Services. This Java processing is especially important for System Z application server workloads running on WebSphere, Tomcat and other Java web and application processing. Next, as the unstructured data world integrates into database systems, the XML processing workload offloaded to zAAP engines continues to assist the growth within everyone’s environment.
zIIP engines are especially great for handling some very popular and intense processing within DB2: remote DB2 access processing, parallel SQL query work, IPSec network encryption, additional XML services, and assisting with DB2 utility sort processing. It is critical to understand and analyze the potentially substantial quantity of these types of processing eligible for these specialty processors in your environment.
At a recent client, the DBAs saw large requirements for these types of workloads and prompted the capacity planners to dig into the zIIP and zAAP utilization analysis. The analysis discovered that the workload was maxing out the zIIP engines and taking capacity from the general processors. The client ordered and installed more zIIP processors, reducing their general processor utilization by providing more zIIP capacity. This also reduced their overall application response time and provided more overall capacity at minimal cost.
zIIP and zAAP processor utilization is not always on the radar but needs attention periodically. The capacity planners’ and DBAs’ analysis should dig deeper into the overall utilization of these specialty processors and determine whether additional specialty engines might relieve the burden on your general processors. Not only will these zIIP and zAAP processors relieve the stress on your general processors, but they can also reduce your overall peak CPU utilization and your MSU software licensing costs. This will save your company money while zIIPing and zAAPing your DB2 performance and your overall system performance.
Whether you are currently using DB2 Version 8 or Version 9, getting ready for all the CPU efficiencies in DB2 10 is easy. Your company may not be thinking about migrating to DB2 10 yet, but DB2 10 is getting huge industry-wide support as the cost savings begin to be realized by management. Everyone wants to save money and, with everyone trying to squeeze their budgets, a faster adoption of DB2 10 is in your future. It is best to be prepared, so here are a few things you can do in DB2 Version 8 that will make it easier to get to DB2 10.
Embrace System Managed Storage SMS
No, I am not Darth Vader telling you to embrace the dark side, but the DB2 10 catalog and directory use SMS. Using SMS is a hard thing for many companies and, personally, I am not a real big fan of SMS. DB2 10 and additional improvements in storage have made me think twice; SMS may now be more help than problem. Being a performance geek, I recommend limiting or not sharing storage volumes within the DB2 SMS definitions. The SMS performance key, if possible, is to segregate the DB2 SMS definitions completely, yes completely, away from other storage definitions. SMS horror stories about I/O troubles are well known, and through my consulting clients I see SMS at the center of many I/O problems. Embrace SMS, but be diligent guarding your DB2 system’s I/O performance and especially your DB2 catalog and directory data sets.
Sharpen Your Security
I am a big supporter of compliance, protecting data assets, security audits and all the security team’s efforts to protect the company and all the business information. DB2 10 offers more privilege and authority granularity in your DB2 systems, and security experts will need to embrace these features to separate out your data access, utility operations and day-to-day work. Start auditing the security usage of your systems area, various DBA support groups and different local and distributed applications, and analyze their needs. Do the activities require data access? Could the DB2 DBA work have been done using a different security level such as DB2 SYSOPR or another security authority? More DB2 10 security privilege and authority granularity improvements are going to affect your security profiles. Get ready and start the security analysis now.
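Catalog queries can start that audit. A sketch listing who holds system-level authorities and which table privileges were granted directly to individual ids rather than groups; the predicates are illustrative and other catalog authorization tables deserve the same review:

```sql
-- Who holds SYSADM or SYSOPR authority (held directly or with GRANT option)?
SELECT GRANTEE, SYSADMAUTH, SYSOPRAUTH
  FROM SYSIBM.SYSUSERAUTH
 WHERE SYSADMAUTH IN ('Y', 'G')
    OR SYSOPRAUTH IN ('Y', 'G');

-- Table privileges granted to individual authorization ids; candidates
-- for consolidation under group ids so thread authorizations can be reused
SELECT GRANTEE, TCREATOR, TTNAME, SELECTAUTH, UPDATEAUTH, DELETEAUTH
  FROM SYSIBM.SYSTABAUTH
 WHERE GRANTEETYPE = ' ';
```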
Learn more about DB2 10 through the replay of Roger Miller’s and my 2010 webcast: DB2 10 for z/OS – Helping you improve operational efficiencies and gain competitive advantage. The webcast replay is available here…
When I talk to millennials coming into management and programming it appears that their knowledge of the mainframe is very limited or non-existent. Since many universities have stopped teaching mainframe classes, the recent college graduates don’t even have a basic understanding of the mainframe computing model, much less its advantages and why it endures through all the attempts to replace it. Since there is no need to reinvent the wheel, we all need to help the millennials understand why the mainframe will always be around with the following four important points.
1. Mainframes are the most reliable, stable, and available systems. Much of the world works because of mainframe computers’ reliability, stability and availability, which power electricity grids, bank ATM networks, credit card transactions, and other services that countries, civilization, and society depend on. Mainframe systems are engineered with the most state-of-the-art hardware and software advances. Since mainframe computers are almost always running at 100% utilization, the platform demands the fastest storage, memory capacities, and CPU chip technology to get processing done efficiently and effectively. Some mainframe computer systems have been operating constantly 24/7/365 for decades.
2. The best system to serve them all. Since hardware and software upgrades are constant, the mainframe processing architecture is built to be redundant so operations can continue while upgrades occur. This provides the ability to support an ever increasing number of operating systems, virtual environments, and user bases with an optimum number of software components and support personnel. Also, since the mainframe capacity can be scaled up horizontally almost limitlessly by adding additional machines the processing power can securely segment and service side by side any and all types of test, QA, production, old, new, batch and interactive state of the art applications from a centralized platform. By servicing mainframe, UNIX, and other operating systems in virtual environments the mainframe can serve data and processing power for any requirements.
3. Mainframe has the most advanced technology available. The mainframe has been around for over fifty years and continues to be at the forefront of hardware and software technology advances. Many of the hardware and software technologies that are available in any big or small computing systems were developed first in the mainframe environment.
From virtual environments, memory management, and multi-tier architectures to cloud-time sharing computing model types, most computer technologies were first developed, leveraged, and continue to be used in mainframe facilities around the world. The most powerful CPU chip speeds, the number of parallel CPU chips, and multi-threading architectures are currently available on the mainframe. The different storage types from tape to flash memory can be architected to the extreme within the broad capacity of the mainframe. Additionally, all these hardware and software technology capabilities are backed up through comprehensive administration capabilities. The mainframe provides flexibility to customize the hardware and software environment and performance settings to optimize processing CPU and I/O capacity requirements while balancing and maximizing utilization.
4. Mainframes provide the lowest cost of ownership. First, the efficient mainframe power and cooling requirements are much cheaper than equivalent distributed UNIX or Windows platforms. Personnel costs are as much as 60% of the budget for any computing environment, and the number of personnel required to administer, configure, and maintain non-mainframe computing environments is much greater compared to the cost-efficient mainframe environment.
Even with open-source solutions, software license and maintenance costs for the distributed environment are usually far more expensive across the multitude of test, QA, and production environments than in a mainframe environment. Since the mainframe has been around for decades, it has chargeback methodologies in place to track every aspect of usage, a process that has negative connotations but actually provides better cost allocation across the corporate structure. Since distributed systems don’t have chargeback, it is sometimes difficult to add up all the same factors. When all the distributed environment factors are monitored and documented, those environments are much more expensive, have troubled availability records, and are tremendously underutilized.
As the millennials join the technology workforce, they may naïvely say that they want to replace the mainframe. It’s our responsibility to educate another generation so they can learn that the reliability, availability, security, and, foremost, the costs of the mainframe continue to make it the best computing platform available. Possible applications on the mainframe are only limited by your imagination, because today it provides all types of analytics, mobile, social, web based, or transactional computing abilities which make the world go around.
As the 2011 European IDUG approaches, it’s good to remember the annual North American IDUG conference in May where the latest and greatest information about DB2 was unveiled. The IDUG Board of Directors and especially the Conference Planning Committee did a great job providing qualified users, consultants and IBM developers as speakers to discuss their technical experiences with DB2 for z/OS and LUW, including the new release of DB2 10. DB2 performance tuning, application development and DBA topics filled the agenda and everyone learned the latest techniques to help their companies gain the competitive advantage.
The DB2 10 Beta program was the largest ever, with testing by 24 companies, and the most successful testing period for a new DB2 version.
The DB2 10 for z/OS software is only about a year old and is definitely being tested by an ever growing number of companies. The very nature of a new release requires the regular introduction of the maintenance for both z/OS and DB2 for the best stable environment and to fully leverage its performance tuning improvements. It is recommended that companies planning to go to DB2 10 should plan on at least quarterly maintenance cycles to fully incorporate all APARs, Hipers, and other fixes or enhancements.
DB2 10 continues to be enhanced due to some of the issues discovered during the beta program and other items that needed additional work. The DB2 for z/OS Version 9 Plan Stability feature is getting a lot of attention with its ability to help with production package access path preservation. The DB2 10 additional features of APCOMPARE and APREUSE are still being worked on. Even if these new enhancements don’t make it into the release, Plan Stability continues to be a great feature introduced in DB2 for z/OS Version 9 for guaranteeing that access paths stay the same or improve with every new application version.
For those of you looking for the DB2 10 White Paper, the sign-up link for the free PDF download from the IBM website is below. You have to sign up because IBM wants to know how many people download the DB2 10 White Paper, but it’s free and you can opt out of any email spam. So download it today at: https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=sw-infomgt&S_PKG=db2_zos_reduce_costs
It is usually pretty easy to quickly implement new DB2 features. DB2 makes it easy to improve your database performance with a new zParm or table space definition. Unlike most of these new features, however, DB2 temporal tables need research before you implement them. DB2 temporal tables offer great flexibility and many data warehouse design options that can be leveraged very effectively--or be abused--with the wrong application design.
To prepare, you need to evaluate whether the application is appropriate for temporal tables. First, with DB2 temporal tables it is even more important to determine the frequency of the inserts, updates and deletes that are going to happen. Frequencies are always a good design point for any application, but they are especially important for DB2 temporal tables because of the way BUSINESS_TIME or SYSTEM_TIME is maintained and how all the data changes are captured within the associated history temporal table. Every data change could really be two operations, because changed rows need to be replicated into your history table. That could be a major performance consideration.
The next research points are the restrictions with DB2 temporal tables and history tables. The temporal table must be a regular table with the added BUSINESS_TIME or SYSTEM_TIME. No clone table capabilities, column masks, row permissions or security label columns are allowed. The same restrictions are in place for the history table.
The temporal table and its associated history table must be kept in sync. Restrictions also exist regarding the altering, adding or removing columns into the temporal table and the history table to guarantee integrity. Also backup and recovery for the temporal table and its history table must be kept in sync and there are restrictions around DB2 Utilities that could delete data from these tables. In addition once the history table is defined, its table space or table cannot be dropped. So make sure the columns desired in your DB2 temporal tables are stable and well defined for the application.
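A minimal DB2 10 system-period sketch showing the base table, the matching history table, and the versioning link that creates the sync obligations described above; table and column names are illustrative:

```sql
CREATE TABLE POLICY
      (POLICY_ID   INTEGER       NOT NULL,
       COVERAGE    DECIMAL(9,2)  NOT NULL,
       SYS_START   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
       SYS_END     TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
       TRANS_ID    TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
       PERIOD SYSTEM_TIME (SYS_START, SYS_END));

-- History table must match the base table column for column
CREATE TABLE POLICY_HIST LIKE POLICY;

-- From here on, every UPDATE/DELETE also writes the old row to POLICY_HIST
ALTER TABLE POLICY ADD VERSIONING USE HISTORY TABLE POLICY_HIST;

-- Time-travel query: DB2 reads the base and history tables as needed
SELECT COVERAGE
  FROM POLICY FOR SYSTEM_TIME AS OF TIMESTAMP('2011-01-01-00.00.00');
```

Note that every UPDATE and DELETE on POLICY now carries the cost of the extra insert into POLICY_HIST, which is exactly the frequency consideration raised earlier.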
There are several resources that should be reviewed before designing your first application using DB2 temporal tables. The first is the IBM DB2 10 manuals. By reading these friendly manuals you get a good understanding of the syntax, various examples and details about all the restrictions. Next, there have been some presentations about DB2 temporal tables at past IDUG and IOD conferences. Track down your colleagues who went to the conferences and get the CD or access to the website for the presentation downloads.