In July 2010 IBM announced the zEnterprise with its new integrated IBM POWER7 and IBM System x blade environments. This gives users the ability to run traditional mainframe applications alongside UNIX- and Intel-based applications. These new capabilities provide a one-stop cloud environment for businesses to deploy and scale applications from any platform.
In August 2010 IBM announced new deployment options for the DB2 pureScale system on IBM System x environments. The DB2 pureScale environment provides the same multi-member DB2 data sharing capabilities as the mainframe within a new SUSE Linux-based solution. Testing shows that the DB2 pureScale environment performs and scales almost linearly, like the z/OS DB2 Data Sharing mainframe environment.
Providing these DB2 Data Sharing-type capabilities within the System x architecture adds another great performance and scalability option to the DB2 LUW family. The DB2 pureScale option, with its tremendous scalability, provides a great expansion path for existing DB2 LUW systems that are running out of capacity within their single computing environment footprint. And now, with the ability to deploy them to the zEnterprise, companies that have DB2 LUW systems can get the reliability and performance management of the mainframe.
DB2 pureScale on System x also provides another open DB2 LUW solution for companies that need to consolidate UNIX- or Intel-based environments and applications. So regardless of the application or system requirements, the DB2 family provides the most open, cost-efficient options. Check out the full IBM DB2 pureScale announcement at IBM.
In this post, I’ll delve a little deeper into the system architecture and CPU reduction opportunities found in a major DB2 system at a large financial institution. (To see the first part of the case study, click here.)
In order to do a complete performance analysis of the system, the system and application statistics were reviewed using the standard DB2 performance reports. This data provided a basis for the various system, database, application and SQL observations and improvement recommendations. These statistics, along with system, process, and application documentation, interviews with application programmers and observations of the workload, guided the investigation of the CPU consumption and the CPU reduction effort.
Current Enterprise Architecture
The enterprise architecture had evolved over the years to support many diverse database systems. This caused several databases to be cloned and their transaction workloads to be intermixed. This combination of CICS transactions produced a diverse workload of different data requirements, run-time durations and application types.
This combination of workloads runs on a single CPU mainframe environment that supports both the test and production environments. Workloads come into the system through a variety of interfaces: CICS, Visual Basic and Web applications using MQ Series Connect and batch jobs throughout the day. These applications access a variety of database tables that support the corporation’s nation-wide business needs.
The enterprise application environment, with its mix of applications, operates efficiently but experiences occasional dramatic spikes in application CPU requirements. These spikes manifest themselves throughout the day when CICS program errors occur and dumps are created, causing the system to pause and dump the transaction state. This occurs too frequently: almost once every 15 minutes in the production CICS region. Busy business periods with multiple concurrent transactions that have a large memory footprint also stress the systems.
Work Load Manager
The architecture of the system and its performance are controlled through a variety of software, with Work Load Manager (WLM) playing a central role in overall system performance. WLM controls CPU allocation and sets the priorities of the different subsystems, online workloads and batch processes.
The WLM settings needed to be analyzed to determine the optimum and most efficient workload settings and whether the DB2, CICS and batch transactions have compatible settings to maximize throughput.
Observing the system processing revealed that the workflow accomplished fluctuates when errors or dumps occur in the various CICS regions. Plotting these dumps against the system workflow showed that system CPU peaked and workflow was severely impacted.
When an on-line program error or dump occurs, its core dump documentation and resolution become the highest priority within the system, stopping or pausing all other work. An example of the problem occurred by 10:30 a.m. on a summer day. Five regions had 27 errors/dumps occur by that time, roughly one every five to six minutes (27 dumps in 150 minutes) during the production work day. Industry standards typically allow only a very small number of these errors or dumps in production regions. This problem relates directly to the application quality assurance testing, and the situation will only continue to degrade the overall workflow and performance of the systems.
CICS Region Configuration and Allocations
The architecture of the CICS systems and the on-line programs reflects how additional data and capabilities have been added over time. New CICS regions and databases were added as additional systems joined the workload and additional features were added to the applications.
These workloads were each separated into their own regions. To improve the overall workflow and provide room to grow the CICS transaction workload efficiently, a Sysplex architecture could be considered. The CICS Sysplex architecture separates the workload into terminal owning regions (TOR), application owning regions (AOR) and data owning regions (DOR) that can be finely tuned to service each specific type of workload. These regions work together to spread and balance the peak transaction workloads.
All of these architecture, system, database, application and SQL considerations provide the opportunity for CPU cost reductions. These cost reductions could be achieved through system tuning, database design analysis, application SQL documentation and application coding standards and reviews. Implementing these has the potential of saving tremendous CPU capacity and delaying a CPU upgrade.
- Analyze the number of abends, deadlocks and dumps within the different parts of your applications. These deadlocks and dumps take a tremendous amount of CPU resources at critical times within your system.
- Make sure that your Work Load Manager (WLM) is set up properly to distribute the CPU resources adequately to the various database systems and applications. Having the database priority at the same level as, or below, the applications can cause performance and overall throughput problems.
- Validate the settings between your CICS transaction configurations. Make sure the maximum workload from one TOR, AOR or DOR is not overwhelming another CICS processing partner.
Dave Beulke is an internationally recognized DB2 consultant, DB2 trainer and education instructor. Dave helps his clients improve their strategic direction, dramatically improve DB2 performance and reduce their CPU demand saving millions in their systems, databases and application areas within their mainframe, UNIX and Windows environments.
We’ve checked into the DB2 Sort Work, EDM, RID and Buffer Pools. In this post, I’ll talk about some of the other standard places to check for performance improvements.
DB2 System Maintenance
The DB2 system software maintenance from IBM contains many fixes and performance adjustments in its software maintenance stream. When investigating this company’s maintenance levels, I discovered that their DB2 system is behind on its maintenance level, which does not allow the latest performance improvements to be leveraged. Maintenance also needs to be coordinated with the implementation of pre-tested Service Packs related to other IBM software products.
These Service Packs test the compatibility between z/OS, IMS, MQ Series, CICS and DB2 and can help eliminate maintenance compatibility issues. By evaluating the latest release compatible with operating system, MQ Series, CICS and other software connecting to DB2, the company can apply the correct maintenance level for their DB2 Version. Yearly maintenance plans need to be developed to help all departments understand the dependencies and the need to apply maintenance on a regular schedule.
Dynamic Statement Cache Pool Sizing and Settings
Additional analysis showed that the Dynamic Statement Cache (DSC) was being leveraged for application efficiency. This recently implemented feature was working well and only needed to be fine-tuned. (The DSC holds frequently executed SQL statements so that DB2 does not have to re-determine the access path, verify object existence or re-check security when various settings are the same in subsequent executions.)
A good portion of the SQL statements at the company were being cached, letting DB2 reuse the previously optimized SQL executing in the system. Leveraging the DSC area has usually shown a 2 to 3% CPU savings per SQL transaction, and it should be monitored closely to maintain its efficiency.
If your environment executes a large percent of dynamic SQL applications, the savings from leveraging the DSC area deserves on-going attention.
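To see why DSC reuse depends on how the SQL is written, consider this sketch (the table and column names are only illustrative):

    -- Literals make every statement text unique, so each execution
    -- gets its own full PREPARE:
    SELECT CUST_NAME FROM CUSTOMER WHERE CUST_ID = 1001;
    SELECT CUST_NAME FROM CUSTOMER WHERE CUST_ID = 2002;

    -- With a parameter marker, repeated executions match one cached
    -- statement and reuse its optimized access path:
    SELECT CUST_NAME FROM CUSTOMER WHERE CUST_ID = ?;

The statement text must match exactly for a DSC hit, so standardizing on parameter markers is one of the easiest ways to keep the cache hit ratio high.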
Checking the various aspects of your DB2 system can have a great impact on the performance of your system. Take a look at these areas to improve system performance:
- Is your DB2 System Maintenance at the appropriate level? Do you have maintenance plans that include checking the Service Pack levels to ease the integration with IMS, CICS, MQ Series and other software within your environment?
- Is your Dynamic Statement Cache Pool set to the appropriate size for your system? Are your settings encouraging SQL caching?
To figure out the best temporal table design, you need to think through the various options and considerations that will affect its performance. The most important aspect of your temporal table is the answers your applications or users expect from it. The best way is to figure out the time aspect the application is trying to capture. Are your applications looking for the financial value, the insurance coverage level, enrollment status, customer value or something else?
The temporal table status can be contingent on two types of settings: business time or the system processing time. If the processing is delayed and the system time is later than expected, does that affect your temporal table status? Or are you using the temporal table in a real time scenario where either the business or system time will affect the meaning of the data? There are many ways to respond to the situations and questions, but the design decision should be based on the application and user questions that need to be answered. So it is best to test both SYSTEM_TIME and BUSINESS_TIME scenarios out and see which design provides the best answers with the best performance.
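A quick sketch of what testing both scenarios looks like in DB2 10 (the table and column names here are hypothetical):

    -- SYSTEM_TIME: what did the database believe the coverage was
    -- as of this point in processing time?
    SELECT COVERAGE_AMT
      FROM POLICY
       FOR SYSTEM_TIME AS OF '2011-07-01-00.00.00'
     WHERE POLICY_ID = 1234;

    -- BUSINESS_TIME: what coverage was in effect on this business
    -- date, regardless of when it was entered?
    SELECT COVERAGE_AMT
      FROM POLICY
       FOR BUSINESS_TIME AS OF '2011-07-01'
     WHERE POLICY_ID = 1234;

Running the same business question both ways quickly shows which time dimension actually answers the application's questions, and at what cost.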
The next design point is to figure out your timestamp type. Do your temporal table applications require distinct, non-overlapping time periods system wide? Your DB2 10 system now has new capabilities to enforce a key that is unique across time periods within the table. This DB2 syntax is defined with the WITHOUT OVERLAPS keyword and can be used only with your BUSINESS_TIME values. After the temporal table is created, an index is defined for it using your unique columns and the BUSINESS_TIME WITHOUT OVERLAPS keyword; BUSINESS_TIME is the only option the WITHOUT OVERLAPS keyword works with.
When BUSINESS_TIME WITHOUT OVERLAPS is specified, the columns of the BUSINESS_TIME period must not be specified as part of the constraint. The specification of BUSINESS_TIME WITHOUT OVERLAPS adds the following to the constraint:
- The end column of the BUSINESS_TIME period in ascending order
- The start column of the BUSINESS_TIME period in ascending order
For TIMESTAMP(12) period columns, the minimum value is 0001-01-01-00:00:00.000000000000 and the maximum value is 9999-12-31-24:00:00.000000000000.
For DATE the minimum is 0001-01-01 and the maximum value is 9999-12-31.
A system-generated check constraint named DB2_GENERATED_CHECK_CONSTRAINT_FOR_BUSINESS_TIME is also generated during this definition process to ensure that the value of end-column-name is greater than the value of start-column-name. BUSINESS_TIME WITHOUT OVERLAPS must not be specified for a partitioned index.
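Putting the pieces together, here is a minimal DDL sketch (the table, column and index names are illustrative):

    CREATE TABLE POLICY
      (POLICY_ID    INTEGER       NOT NULL,
       COVERAGE_AMT DECIMAL(11,2) NOT NULL,
       BUS_START    DATE          NOT NULL,
       BUS_END      DATE          NOT NULL,
       PERIOD BUSINESS_TIME (BUS_START, BUS_END));

    -- The unique index carries the WITHOUT OVERLAPS keyword; per the
    -- restriction above, it cannot be a partitioned index:
    CREATE UNIQUE INDEX POLICY_IX
        ON POLICY (POLICY_ID, BUSINESS_TIME WITHOUT OVERLAPS);

With this definition, DB2 rejects any second row for the same POLICY_ID whose business period overlaps an existing one.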
There are a number of considerations when creating your DB2 10 temporal table. When your application needs system-wide uniqueness, the BUSINESS_TIME WITHOUT OVERLAPS option provides that capability, with some cautions. Check out other posts on temporal tables via Developer Works or at my site, www.davebeulke.com.
It is usually pretty easy to quickly implement new DB2 features. DB2 makes it easy to improve your database performance with a new zParm or table space definition. Unlike most of these new features, however, DB2 temporal tables need research before you implement them. DB2 temporal tables offer great flexibility and many data warehouse design options that can be leveraged very effectively--or be abused--with the wrong application design.
To prepare, you need to evaluate whether the application is appropriate for temporal tables. First, with DB2 temporal tables it is even more important to determine the frequency of the inserts, updates and deletes that are going to happen. Frequencies are always a good design point for any application, but they are especially important for DB2 temporal tables because of the way BUSINESS_TIME or SYSTEM_TIME is maintained and how all the data changes are captured within the associated history temporal table. Every data change could really be two processes, because rows need to be replicated into your history table. That could be a major performance consideration.
The next research points are the restrictions with DB2 temporal tables and history tables. The temporal table must be a regular table with the added BUSINESS_TIME or SYSTEM_TIME. No clone table capabilities, column masks, row permissions or security label columns are allowed. The same restrictions are in place for the history table.
The temporal table and its associated history table must be kept in sync. Restrictions also exist on altering, adding or removing columns in the temporal table and the history table to guarantee integrity. Backup and recovery for the temporal table and its history table must also be kept in sync, and there are restrictions around DB2 utilities that could delete data from these tables. In addition, once the history table is defined, its table space or table cannot be dropped. So make sure the columns desired in your DB2 temporal tables are stable and well defined for the application.
There are several resources that should be reviewed before designing your first application using DB2 temporal tables. The first is the set of IBM DB2 10 manuals. By reading these friendly manuals you get a good understanding of the syntax, various examples and details about all the restrictions. Next, there have been presentations about DB2 temporal tables at past IDUG and IOD conferences. Track down your colleagues who went to the conferences and get the CD or access to the website for the presentation downloads.
DB2 offers application designers new functionality for their data warehousing requirements. The new DB2 10 Temporal Tables provide a way to have a snapshot in time of the status of customers, orders or any other type of business situation.
DB2 Temporal Tables, with their built-in functionality, automatically understand the business time or system time of the data entered into the system. This functionality is ideal for handling and documenting the condition of any business aspect at a certain time. It is driven by two new time period definitions, BUSINESS_TIME and SYSTEM_TIME, defined within a table definition. Using these new time periods within a DB2 Temporal Table definition provides a system-maintained, business-maintained or bi-temporal time period for your data.
Many systems today have manual processes or utilities that manage or migrate their real time data to history tables. The new DB2 Temporal Tables with their new system time and business time columns can be used in conjunction with a user-defined trigger to automatically migrate transactional temporal table data to another user defined HISTORY table. Having these facilities built into the database greatly improves regulatory compliance, operations and overall DB2 performance tuning.
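For the system-time side, DB2 10 can also do the history migration itself, without a user-written process. A minimal sketch, with illustrative table and column names:

    CREATE TABLE ORDERS
      (ORDER_ID  INTEGER NOT NULL,
       STATUS    CHAR(8) NOT NULL,
       SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
       SYS_END   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
       TRANS_ID  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
       PERIOD SYSTEM_TIME (SYS_START, SYS_END));

    -- The history table mirrors the base table's columns, without
    -- the generated attributes:
    CREATE TABLE ORDERS_HIST
      (ORDER_ID  INTEGER NOT NULL,
       STATUS    CHAR(8) NOT NULL,
       SYS_START TIMESTAMP(12) NOT NULL,
       SYS_END   TIMESTAMP(12) NOT NULL,
       TRANS_ID  TIMESTAMP(12));

    -- From this point on, DB2 automatically writes the before-image
    -- of every UPDATE and DELETE into the history table:
    ALTER TABLE ORDERS ADD VERSIONING USE HISTORY TABLE ORDERS_HIST;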
Separating the real-time transaction data from the old data within your database using the HISTORY table requires planning and design steps. The separation of old data from new data helps guarantee that application and SQL performance does not suffer when your database is fully populated. It also helps DB2 performance tuning management, so more resources can be devoted to the new transaction data, where DB2 performance tuning matters for business operational success.
Over the coming weeks I will go through the steps and design decisions required to set up a Temporal Table. We will go through the SYSTEM_TIME, BUSINESS_TIME and a bi-temporal table design.
Previously, I talked about the first alphabetic group of DB2 data warehouse DSNZPARMS that can improve your access paths and overall application performance. This week the second set of DSNZPARMS is discussed. Many of the data warehouse DSNZPARMS discussed are somewhat hidden within the regular DSNZPARM install panels. All of the DSNZPARMS discussed are available in DB2 for z/OS Version 9, and some are available in DB2 Version 8.
Caution needs to be taken with all system settings, especially these data warehouse DSNZPARMS. These DSNZPARMS are meant to change access paths and improve them, but each data warehouse design is unique, along with each application access path, so results will vary. If the data warehouse DB2 subsystem is shared with other OLTP or operational applications, I highly recommend fully documenting and setting up a full PLAN STABILITY plan and package management structure for your current access paths before changing any DSNZPARMS. This documentation, along with good PLAN STABILITY plan and package management implementation and back-out practices, helps you quickly react to your environment and back out any detrimental access paths encountered through an unexpected rebind of any program.
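As a sketch of what that plan stability practice looks like (the collection and package names are illustrative, and these are DSN subcommands rather than SQL), save the current access paths while rebinding:

    REBIND PACKAGE(COLLID1.PROGRAM1) PLANMGMT(EXTENDED)

and, if a later rebind picks up a detrimental access path, fall back to the previously saved copy:

    REBIND PACKAGE(COLLID1.PROGRAM1) SWITCH(PREVIOUS)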
Some of the comments from previous blogs on data warehouse applications highlighted the resurgence of data warehousing on the z/OS platform and why running a data warehouse on z/OS provides many advantages over other platforms. One point noted by several people is that when your data warehouse runs on z/OS, the huge ETL processes don’t usually have to transmit the data over a network. Even though network bandwidth is robust, avoiding this extra bottleneck can sometimes save hours of overhead, guaranteeing that your refresh jobs have enough time every day to provide critical refreshes of the data within your data warehouse. Additionally, most of your source data warehouse data comes from the z/OS operational systems and can quickly be put into operational business intelligence data warehouses. This fresh data increases sales, provides real-time inventory and product availability updates and, most importantly, removes latency from your critical single master data source of record for the enterprise.
Improve your system and application performance by adjusting these data warehouse DSNZPARMS to improve your access paths, using the superior DB2 optimizer technology for the most efficient performance available.
To get Part 2 of the DB2 V9 DSNZPARM settings, click here.
DB2 Java performance is often a problem because the application processing emulates work the database can execute more efficiently, or because the processing is poorly designed. Both of these scenarios, which my teams have found during performance and design reviews of DB2 Java systems and applications, always lead to extended I/O activity and excessive CPU usage.
Too often, when the DB2 Java application was designed, the full scope of the eventually implemented processing was unknown. The specifications, or even the coding of the backbone framework processing, began before everything was known, or so many additional processing add-ons were bolted onto the transaction that the original design no longer fits and no longer performs well. When additional functions are added into the transaction scope, many times the additional data retrievals are not added into the existing SQL processing. Instead, they are coded as additional stand-alone SQL calls. This leads to SQL statement after SQL statement being executed during a single transaction of a DB2 Java application.
These add-on transaction functions typically add more SQL to the transaction unit-of-work. This leads to DB2 Java transactions that seem to need hundreds or even thousands of SQL database calls to process from beginning to end. These large numbers of SQL calls usually touch and lock a large number of tables, inspect the data and finally perform the transaction processing. This typically uses excessive CPU and performs a large number of unneeded I/Os.
By not combining or enhancing the existing SQL to retrieve the additional data, the overall number of calls continues to expand and the DB2 Java database performance continues to suffer. The application design is needlessly neglected when these new requirements come along. When the changing requirements result in additional SQL calls with the application itself evaluating or combining new SQL data with an existing object data store, the result is more CPU usage and poor response time.
To avoid these types of situations in your DB2 Java application, understand all the data that is needed by your transaction. Application processing that combines or re-evaluates data needs to be pushed back into the existing database SQL statements; DB2 does it much more efficiently. Retrieving additional data separately is bad for I/O, CPU, locking and overall performance. So next time your DB2 Java transaction needs additional functionality, don’t just add on; integrate your new functions into the existing SQL and design of your DB2 Java database processes.
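A tiny sketch of the difference (the table names are made up): instead of the transaction issuing separate calls such as

    SELECT C.CUST_NAME FROM CUSTOMER C WHERE C.CUST_ID = ?;
    SELECT A.BALANCE   FROM ACCOUNT  A WHERE A.CUST_ID = ?;

and combining the results in Java, fold the work into one statement and let DB2 do the combination:

    SELECT C.CUST_NAME, A.BALANCE
      FROM CUSTOMER C
      JOIN ACCOUNT  A ON A.CUST_ID = C.CUST_ID
     WHERE C.CUST_ID = ?;

One call instead of two means fewer round trips, fewer locks held, and the optimizer gets to pick the join method.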
When I talk to millennials coming into management and programming it appears that their knowledge of the mainframe is very limited or non-existent. Since many universities have stopped teaching mainframe classes, the recent college graduates don’t even have a basic understanding of the mainframe computing model, much less its advantages and why it endures through all the attempts to replace it. Since there is no need to reinvent the wheel, we all need to help the millennials understand why the mainframe will always be around with the following four important points.
1. Mainframes are the most reliable, stable, and available systems. Most of the world works because of mainframe computers’ reliability, stability and availability, which power the electricity grids, bank ATM networks, credit card transactions, and other services that countries, civilization, and society depend on. Mainframe systems are engineered with the most state-of-the-art hardware and software advances. Since mainframe computers are almost always running at 100% utilization, the platform demands the fastest storage, memory capacities, and CPU chip technology to get processing done efficiently and effectively. Some mainframe computer systems have been operating constantly, 24/7/365, for decades.
2. The best system to serve them all. Since hardware and software upgrades are constant, the mainframe processing architecture is built to be redundant so operations can continue while upgrades occur. This provides the ability to support an ever-increasing number of operating systems, virtual environments, and user bases with an optimum number of software components and support personnel. Also, since mainframe capacity can be scaled horizontally almost limitlessly by adding additional machines, the processing power can securely segment and service, side by side, any and all types of test, QA, production, old, new, batch, and interactive state-of-the-art applications from a centralized platform. By servicing mainframe, UNIX, and other operating systems in virtual environments, the mainframe can serve data and processing power for any requirements.
3. Mainframe has the most advanced technology available. The mainframe has been around for over fifty years and continues to be at the forefront of hardware and software technology advances. Many of the hardware and software technologies that are available in any big or small computing systems were developed first in the mainframe environment.
From virtual environments, memory management, and multi-tier architectures to cloud and time-sharing computing models, most computer technologies were first developed, leveraged, and continue to be used in mainframe facilities around the world. The most powerful CPU chip speeds, the largest numbers of parallel CPU chips, and multi-threading architectures are currently available on the mainframe. The different storage types, from tape to flash memory, can be architected to the extreme within the broad capacity of the mainframe. Additionally, all these hardware and software technology capabilities are backed up through comprehensive administration capabilities. The mainframe provides the flexibility to customize the hardware and software environment and performance settings to optimize CPU and I/O capacity requirements while balancing and maximizing utilization.
4. Mainframes provide the lowest cost of ownership. First, the efficient mainframe power and cooling requirements are much cheaper than those of equivalent distributed UNIX or Windows platforms. Personnel costs are as much as 60% of the budget for any computing environment, and the number of personnel required to administer, configure, and maintain non-mainframe computing environments is much greater than in the cost-efficient mainframe environment.
Even with open-source solutions, software license and maintenance costs for the distributed environment are usually far more expensive across the multitude of test, QA, and production environments than in a mainframe environment. Since the mainframe has been around for decades, it has chargeback methodologies in place to track every aspect of usage, a process that has negative connotations but actually provides better cost allocation across the corporate structure. Since distributed systems don’t have chargeback, it is sometimes difficult to add up all the same factors. When all the distributed environment factors are monitored and documented, they are much more expensive, have troubled availability records, and are tremendously underutilized computing environments.
As the millennials join the technology workforce, they may naïvely say that they want to replace the mainframe. It’s our responsibility to educate another generation so they can learn that the reliability, availability, security, and, foremost, the costs of the mainframe continue to make it the best computing platform available. Possible applications on the mainframe are only limited by your imagination, because today it provides all types of analytics, mobile, social, web based, or transactional computing abilities which make the world go around.
I received some great comments and questions based on the blog entry/video about the many new features in DB2 Cobra. One of the more interesting questions asked was, “How much work is it, or how compatible is the new DB2 with Oracle?” The answer is that DB2 is very compatible; so compatible, in fact, that many people have migrated from Oracle to DB2 quickly.
So what are the reasons or justifications for management to consider changing their database from Oracle to DB2? The top four reasons I have heard recently are:
- Oracle contract pricing continues to go up. Every time we look, Oracle wants more money because we had to add CPU cores to get application performance.
- The performance of our BI OLAP Oracle workload does not respond. We always have to rewrite the SQL or create new indexes to make it work. When we tried it on DB2, the optimizer automatically rewrote the SQL, avoiding the manual SQL rewrite and improving application performance and overall response time.
- The DB2 disaster recovery and HADR feature is much better than any Oracle stand-by or fail-over solution. Now that the application is compatible with DB2, there is no reason to risk the business because of an Oracle failure.
- Saving IT budget money is paramount these days, and with our Oracle database we continue to spend money on expanding disk requirements. DB2 data compression and the new index compression provide a way to reduce disk costs, improve database performance and minimize backup and recovery time, reducing disk requirements and saving money.
Converting your database from Oracle to DB2 is very easy. You can save money for your company, your IT department, and maybe your job by migrating to DB2.
Dave Beulke is an internationally recognized DB2 consultant, DB2 trainer and education instructor. Dave helps his clients improve their strategic direction, dramatically improve DB2 performance and reduce their CPU demand, saving millions in their systems, databases and application areas within their mainframe, UNIX and Windows environments. Search for more of Dave Beulke’s DB2 Performance blogs on DeveloperWorks and at www.davebeulke.com.
When I work with millennials these days in all their different management and programming duties, the experience is always very interesting. Each millennial has a very different point of view of IT and data processing. Even within the Big 4 consulting firms’ new hires, the college experience, IT degrees, consulting company training, and aptitude for IT activities are widely different, which leads to a lot of discussion about the best solution and the best procedures to use to get the project done.
Talking to millennials is always fascinating, and understanding their point of view is critical for communicating the structure of the massive systems that are running the business environment. Asking millennials the following four questions can give you a good understanding of why they may be having difficulties with their management and development duties and the maintenance of legacy IT systems.
What kind of programming did you do in college? This is an interesting question for millennials because some of them have never done any programming. Never. College graduates can now get an IT degree without having to write a single program in any language throughout their entire college curriculum. Some have written fewer than 10 programs in their entire four-year college degree program, and the languages they used were Basic, Python, or some other scripting language.
Now I understand why all the companies are hiring people from other countries. Some of the non-Americans who I have talked to have at least a little more experience with Java, C++, COBOL, JSON, or similar languages. Understanding programming, its complexity, and its methodology is critical to writing requirements, specifications and efficient IT solutions. The next time you run into a college-aged millennial, ask them how much programming or what languages their college required for an IT degree. You might be surprised at the answer.
What is the best form of IT documentation you have worked with, and why was it so good? This is always an interesting conversation because it explores the millennial’s experience within their studies and brief work experience. It also explores their consulting firm training, because usually they are taught a methodology to follow within their consulting firm. Sometimes the answers are really interesting in that the millennials only have Agile project experience. Agile projects can produce good documentation, but they are notorious for not producing any documentation or, at best, useless documentation that has no substance or value.
Educating millennials on the development specifications and documentation requirements for your company or project is vital to reduce development confusion and overall on-going maintenance costs. Making sure everyone understands the difference between good and bad documentation and how good documentation can be done easily, quickly, and succinctly is critical for the long term success of your project.
What is the largest number of processes that have run concurrently on systems you’ve analyzed, designed or worked on? Exploring the millennial’s answer to this question is interesting because it gives you an understanding of their working knowledge of analysis, development, and programming of large systems. By the time I got my degree I had written programs in Assembler, PASCAL, COBOL, FORTRAN, RPG, Basic and done scripting in JCL, KSH and BASH. While I didn’t have any business experience when I started in IT many years ago, the many different programs I wrote, the programming languages, the case study analysis of the complex systems gave me the tools to ask some of the right questions.
Millennials who work only on smaller Windows and UNIX systems usually don’t have an idea of the requirements for concurrent or constant processing, or availability requirements. When discussing their answer, talk about the criteria your company uses to prioritize different applications, the framework for the different priorities, and the different architectural factors that drive the corporation’s processing. Communicate that all the current processing has to continue, not just the project that is currently under development.
Which do you think is more important: process or data? This question isn’t really fair to millennials, because the answer in my mind is both. Listening to their answer and their justification gives you some insight into how they may be approaching the IT environment and development tasks.
If there is one thing I have learned throughout my career about process and data, it is that one isn’t any good without the other. Any architecture, designs or decisions made without considering both can be disastrous for database and processing designs. What is good about this question is that it explores the person’s attitude toward coming up with an IT solution. Processing and data go hand in hand, and good IT solutions are not built without considering the requirements for both.
The IT profession is always changing, and millennials have a tough job learning all the new and old technology. Using the four questions above to explore millennials’ backgrounds and points of view will lead to a greater understanding of the best way to solve complex IT architecture, design and development issues.
Dave Beulke is a system strategist, application architect, and performance expert specializing in Big Data, data warehouses, and high performance internet business solutions. He is an IBM Gold Consultant, Information Champion, President of DAMA-NCR, former President of the International DB2 User Group, and frequent speaker at national and international conferences. His architectures, designs, and performance tuning techniques help organizations better leverage their information assets, saving millions in processing costs.
Whether you are currently using DB2 Version 8 or Version 9, getting ready for all the CPU efficiencies in DB2 10 is easy. Your company may not be thinking about migrating to DB2 10 yet, but DB2 10 is getting huge industry-wide support as the cost savings begin to be realized by management. Everyone wants to save money, and with everyone trying to squeeze their budgets, a faster adoption of DB2 10 is in your future. It is best to be prepared, so here are a few things you can do in DB2 Version 8 that will make it easier to get to DB2 10.
Embrace System Managed Storage (SMS)
No, I am not Darth Vader telling you to embrace the dark side, but you should know that the DB2 10 catalog and directory use SMS. Using SMS is a hard thing for many companies to do and, personally, I am not a big fan of SMS. DB2 10 and additional improvements in storage have made me think twice, though, and SMS may be more help than problem. Being a performance geek, I recommend limiting or not sharing storage volumes within the DB2 SMS definitions. The SMS performance key, if possible, is to segregate the DB2 SMS definitions completely, yes completely, away from other storage definitions. SMS horror stories about I/O troubles are well known, and through my consulting clients I see SMS at the center of many I/O problems. Embrace SMS, but be diligent guarding your DB2 system’s I/O performance, especially your DB2 catalog and directory data sets.
Sharpen Your Security
I am a big supporter of compliance, protecting data assets, security audits and all the security team’s efforts to protect the company and all the business information. DB2 10 offers more privilege and authority granularity in your DB2 systems, and security experts will need to embrace these features to separate out your data access, utility operations and day-to-day work. Start auditing the security usage of your systems area, various DBA support groups, and different local and distributed applications, and analyze their needs. Do the activities require data access? Could the DB2 DBA work perhaps be done using a different security level, such as DB2 SYSOPR or another security authority? More DB2 10 security privilege and authority granularity improvements are going to affect your security profiles. Get ready and start the security analysis now.
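One hedged sketch of where to start that analysis (the authorization ID is made up): grant a narrower authority where full SYSADM is not needed, and query the catalog to see who holds system privileges today:

    -- Grant operator-level authority instead of SYSADM where it suffices:
    GRANT SYSOPR TO OPSUPPRT;

    -- Audit current holders of system authorities in the catalog:
    SELECT GRANTEE, SYSADMAUTH, SYSCTRLAUTH, SYSOPRAUTH
      FROM SYSIBM.SYSUSERAUTH;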
Learn more about DB2 10 through the replay of Roger Miller’s and my 2010 webcast: DB2 10 for z/OS – Helping you improve operational efficiencies and gain competitive advantage. The webcast replay is available here…
Jeff Jonas' keynote session at IDUG Europe 2010 brought up several interesting thoughts and ideas. Once the sessions and conversations started, it seemed that Java, Hibernate and .Net systems have started to cause DB2 performance problems for a large number of companies. Many great hallway conversations pointed out how we all have great standards, code review, and EXPLAIN processes within our COBOL infrastructure, but have nothing within these other development environments, including DB2 Java. This is common, and I always help clients with their DB2 Java performance by using the Optimization Service Center, Visual Explain, Optim Data Studio and Query Tuner products. All of these are great for quickly improving DB2 Java, Hibernate and sometimes even .Net systems.
Java, Hibernate and .Net Projects
Several people wanted to hear about my experience fixing DB2 Java performance, working with the Optim Data Studio products and how they can help with DB2 Java, Hibernate and .Net projects. We talked about how easily the SQL can be uncovered and then changed from dynamic JDBC Java processing to static SQL with the new IBM pureQuery product. For several companies, their storage-constrained DB2 systems can really use the reduction in the dynamic statement cache that comes from converting these problem DB2 Java applications to static SQL. In addition, the bonus of a CPU reduction for Java, Hibernate and other JDBC-connected applications, which as static applications no longer have to double-check security, object existence and access plans, is a huge business selling point for getting pureQuery implemented as soon as possible.
Be sure to join us at IDUG 2011 in Prague, where I'll be presenting “DB2 10 Temporal Database Designs for Performance” on November 14th.
Recent work with a client’s SOA environment showed several interesting DB2 performance issues. One DB2 performance issue that was quite stunning was the large number of connections that the .Net and Java applications were making to DB2 and other systems. Researching the system and applications further uncovered a wide disparity in the handling and number of connections each of the many application modules was using. Proper connection handling is very important to DB2 performance for three main reasons: acquiring new connections is expensive, application connections maintain database locks, and connections are unit-of-work transactions.
First, getting a database connection is expensive because of all those great things that a database provides, such as security and integrity. Since the database is important, every connection request must have its security checked and be authorized. This security authorization against the database system and the data desired is quick, but it takes time. Next, the database guarantees integrity through its transaction logging of the unit of work. Starting a new database unit of work is again fast, but it must be managed within the database so that it can be backed out should the transaction fail.
Second, within the database connection, the SQL processing selects, inserts, updates or deletes data. These actions hold locks against the referenced data and prevent other applications from trying to update the same data or reference the same deleted data. DB2 applications have several mechanisms to control and handle this locking within the application and system. The best way for DB2 performance is to bind the application against the database using the bind parameters for cursor stability, ISOLATION(CS), and CURRENTDATA(NO). This minimizes the immediate locks held and allows other transactions more concurrency with the data. If the application is read-only and is not concerned with other transactions manipulating the data, then use uncommitted read, ISOLATION(UR). The ISOLATION(UR) setting is preferred for applications referencing data that doesn’t change.
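A small sketch of both options side by side (the names are illustrative, and the BIND is a DSN subcommand rather than SQL):

    BIND PACKAGE(COLLID1) MEMBER(PROGRAM1) ISOLATION(CS) CURRENTDATA(NO)

and, for read-only statements against data that does not change, isolation can also be relaxed per statement:

    SELECT RATE_CODE, RATE_DESC
      FROM RATE_CODES
      WITH UR;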
Third, the application unit of work must maintain the connection. Large application workloads that perform too many updates, inserts or deletes within a unit of work hold on to too many locks and can cause extended back-out times when an application fails, impacting DB2 performance. It is very important to have the proper transaction commit scope and to issue appropriate commits to minimize the number of locks and the amount of work that the database may have to back out. It is also critical for applications to reference the database tables and perform their updates in the same sequence. Referencing the data in the same order acquires and releases locks synchronously, allowing more application concurrency. Since your application wants to minimize the number of locks and the time those locks are held, it is always best to do your data updates and inserts right before your application performs a commit or ends the transaction. This minimizes the time the locks are held and again provides more workload concurrency.
While all of this information is standard practice for most applications, within the new SOA architectures the services may not know much about the unit-of-work or connection situation. Within one client SOA architecture, recent research showed that a particular module had seven different connections active within its service: DB2 for z/OS, DB2 for LUW, Oracle, MQ Series inbound and outbound queues, and connections to application and web servers for AJAX activities. It is a bit much to have all of these connections within a single service, and when some minor changes caused this module to fail, many processes could not function. Debugging was also very difficult, because one connection failure caused all the connection participants to back out their transactions, causing more locking and data integrity issues.
Make sure your application handles connections properly, because they are expensive to acquire and impact DB2 performance. Minimize the number of database activities within a transaction to minimize the locks, and understand the number of connections involved within a particular unit-of-work so that you can get the best DB2 performance from your applications.
Three Things to Do to Retain Your DB2 Query Performance
According to the majority of the DB2 10 beta customers, the performance figures for DB2 10 are true, and sometimes better than the advertised 5-10% savings right out of the box. But this is not to say that your DB2 10 migration and experience will be as good or better. Previous DB2 version migration horror stories abound, and I have helped many clients tune and improve their DB2 system and query performance. Some of these engagements have seen improvements in query performance by correcting DSNZPARM settings that obviously got messed up. Did it happen during a migration? Maybe. So here are three things you can do now in DB2 V9 that will help your applications retain their current DB2 query performance and get the most out of your DB2 10 system and applications.
Gather Current Performance and DB2 Explain Output
To retain your query performance tuning going into DB2 10, you first need to understand and measure your current DB2 9 query performance. Gather performance figures and EXPLAIN output for all your applications, and understand the SQL access and overall processing of each application. Having these statistics before going into a DB2 10 migration is the first step to understanding how much query performance improvement your systems and applications are experiencing from the new DB2 10 features.
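One hedged way to capture that baseline (the QUERYNO and the query itself are illustrative) is to EXPLAIN each important statement and keep the PLAN_TABLE rows as the "before" picture:

    EXPLAIN PLAN SET QUERYNO = 100 FOR
      SELECT ORDER_ID FROM ORDERS WHERE CUST_ID = ?;

    -- The saved access path details become the comparison point
    -- after the DB2 10 migration:
    SELECT QUERYNO, METHOD, ACCESSTYPE, ACCESSNAME, MATCHCOLS
      FROM PLAN_TABLE
     WHERE QUERYNO = 100;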
Leverage DB2 Version 9 BASIC and EXTENDED Plan Stability Features
To retain your query performance tuning, leverage the DB2 Version 9 BASIC and EXTENDED plan stability features. These features provide an easy way to preserve, or fall back to, a good package access path. By using this feature you can save off a good access path associated with the EXPLAIN information that was gathered in Step 1. Also, by setting up the BASIC and EXTENDED plan stability features, any special bind parameters or table/index statistics considerations can be exposed and documented before the migration to DB2 10. In addition, the REBIND process will help your system make the transition from DB2 plans to DB2 packages and bind everything in the current DB2 9. If you are migrating from DB2 Version 8, convert all your DB2 plans to packages and make a copy of the packages with a different OWNER or COLLID, such as “SAVED” or something obvious. This way you can copy back, include the backup collection, or point your application at these SAVED packages.
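One hedged way to take that package copy (the collection and package names are illustrative; this is a DSN subcommand) is with BIND COPY into the "SAVED" collection:

    BIND PACKAGE(SAVED) COPY(COLLID1.PROGRAM1) ACTION(REPLACE)

The application's package list can then include the SAVED collection, or the copy can be bound back, if fallback is ever needed.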
Determine Impact of DB2 10
Next, determine whether there is a high, medium or low probability that the DB2 packages will be able to leverage or be influenced by the new DB2 10 features. The DB2 10 improved parallel INSERT into multiple indexes feature will improve elapsed time but not CPU time. The new Stage 1 SQL optimization improvements will potentially cut both elapsed and CPU time for your applications’ queries. Determine which of your individual application DB2 packages could see a benefit from the many DB2 10 features. (Go to my DB2 10 White Paper for a complete list of the DB2 10 enhancements.) Analyze your applications and understand which ones will benefit or may have potential issues before your DB2 migration. Use your plan stability BASIC and EXTENDED packages to keep the best-performing access path, regardless of whether it comes from DB2 10 or DB2 9, and you will definitely have success and a good experience once your DB2 10 migration is complete.
Follow these three steps to ensure your query performance stays the same or, more likely, improves substantially.