DB2 upgrades are not the same at any two companies and may not even be the same within a single company. Because multiple DB2 systems often support different applications with varying requirements, the Return on Investment (ROI) is completely different for each. Comparing DB2 z/OS or DB2 LUW upgrades to Oracle system upgrades does not even begin to be valid, because these databases have completely different contract clauses, ROI requirements and upgrade factors. Apples and oranges are both edible, but that is where the comparison stops.
DB2 upgrades depend on the technology. First-hand experience has shown that bad research, poor database design, poor application development techniques, bad choices in third-party vendor tools and an assortment of other factors make ROI equations for upgrades complex or almost impossible. Having helped fix bad technology decisions, redesign bad database designs and resolve SQL application performance problems, I know that reducing costs is always on everyone’s mind. Upgrading any software especially raises cost consciousness.
To minimize upgrade costs, strive for a better understanding of the DB2 technology; this will help everyone avoid unnecessary costs and bad decisions. Work on minimizing your DB2 10 upgrade costs through better management of your DB2 systems today. I’ve already mentioned preparing your SMS environment and improving your security definitions to get ready for the separation of security and data access duties.
This week the suggestion for preparing for DB2 10 is to begin analyzing your DB2 application techniques. The first thing to analyze is how your plans and packages are put together. IBM’s DB2 engineers have been telling everyone for many releases to move to DB2 packages within their environment. In DB2 10 the old plan management technique of binding DBRMs directly into plans must be replaced by binding them into packages within collections (COLLIDs) and then including those collections in your DB2 plans. It is a small difference, but it has big implications. If your shop still binds DBRMs directly into plans, it is time to start a process to migrate to the new method.
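A minimal sketch of the package-based bind process, using hypothetical names (COLLA, PROGA and PLANA stand in for your own collection, DBRM member and plan names):

    BIND PACKAGE(COLLA) MEMBER(PROGA) ACTION(REPLACE) VALIDATE(BIND)
    BIND PLAN(PLANA) PKLIST(COLLA.*) ACTION(REPLACE)

Once every DBRM is bound into a package within a collection, the plan simply points at the collection through its PKLIST and the DBRMs disappear from the plan itself.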
Next, the compilers that your applications use also need to be current. The DB2 10 COBOL precompiler generates new SQLCA and SQLDA structures and generates COMP-5 binary data attributes instead of COMP-4 or COMP. Even though there are still ways to handle old compilers, it is best to review your COBOL compiler versions now to make sure they will be ready for your eventual DB2 migration.
Dave Beulke is an internationally recognized DB2 consultant, DB2 trainer and education instructor. Dave helps his clients improve their strategic direction, dramatically improve DB2 performance and reduce their CPU demand, saving millions in their systems, databases and application areas within their mainframe, UNIX and Windows environments.
Whether you are currently using DB2 Version 8 or Version 9, getting ready for all the CPU efficiencies in DB2 10 is easy. Your company may not be thinking about migrating to DB2 10 yet, but DB2 10 is getting huge industry-wide support as the cost savings begin to be realized by management. Everyone wants to save money, and with everyone trying to squeeze their budgets, a faster adoption of DB2 10 is in your future. It is best to be prepared, so here are a few things you can do in DB2 Version 8 that will make it easier to get to DB2 10.
Embrace System Managed Storage (SMS)
No, I am not Darth Vader telling you to embrace the dark side, but letting you know that the DB2 10 catalog and directory use SMS. Using SMS is a hard thing for many companies, and personally I am not a big fan of SMS. DB2 10 and the additional improvements in storage have made me think twice: SMS may now be more help than problem. Being a performance geek, I recommend limiting or not sharing storage volumes within the DB2 SMS definitions. The SMS performance key, if possible, is to segregate the DB2 SMS definitions completely, yes completely, away from other storage definitions. SMS horror stories about I/O troubles are well known, and through my consulting clients I see SMS at the center of many I/O problems. Embrace SMS, but be diligent guarding your DB2 system’s I/O performance and especially your DB2 catalog and directory data sets.
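DB2 10 also lets you tie SMS classes directly to a DB2 storage group, which helps keep the DB2 definitions segregated. A minimal sketch, assuming hypothetical SMS class names (DB2DATA, DB2MGMT, DB2STOR) and a hypothetical catalog alias (DSNCAT) that your storage administrator would define:

    CREATE STOGROUP SGDB2DW
           VOLUMES ('*')
           VCAT DSNCAT
           DATACLAS DB2DATA
           MGMTCLAS DB2MGMT
           STORCLAS DB2STOR;

With dedicated SMS classes behind the storage group, the DB2 data sets land on volumes that are not shared with other workloads.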
Sharpen Your Security
I am a big supporter of compliance, protecting data assets, security audits and all the security team’s efforts to protect the company and its business information. DB2 10 offers more privilege and authority granularity in your DB2 systems, and security experts will need to embrace these features to separate out your data access, utility operations and day-to-day work. Start auditing the security usage of your systems area, your various DBA support groups and your different local and distributed applications, and analyze their needs. Do their activities really require data access? Could the DB2 DBA work have been done using a different security level such as DB2 SYSOPR or another security authority? The DB2 10 security privilege and authority granularity improvements are going to affect your security profiles. Get ready and start the security analysis now.
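As one illustration of the new granularity, DB2 10 lets you grant administrative authority without implicit access to the data itself. A minimal sketch, using a hypothetical authorization ID DBAID1:

    GRANT DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL
          ON SYSTEM TO DBAID1;

This gives the DBA the ability to manage objects system-wide while a separate grant, or none at all, controls whether that ID can actually read or change the business data.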
Learn more about DB2 10 through the replay of Roger Miller’s and my 2010 webcast: DB2 10 for z/OS – Helping you improve operational efficiencies and gain competitive advantage. The webcast replay is available here…
After the original post of “Reminding Management of the Advantages of System Z,” many people commented on how their management is increasingly out of touch with the mainframe. Commenters also stated that the System Z environment is really processing almost all of the transactions in their company, and that all the Windows platform systems continue to have scalability issues.
Although IBM’s mainframe revenue suffered in 2009 because of its upgrade cycle, the introduction of the z10 platform demonstrates it is the best open system. Yes, that is correct: the mainframe is the most open system available because it runs all types of workloads, from the legacy standards of Assembler, COBOL and PL/I to C++, C#, Java, PHP and the rest of the languages that run on UNIX and Windows boxes.
Also, some people are starting to run “virtualized” Windows on mainframe environments; PCWorld highlighted this capability in early 2009. With System Z speed, scalability and networking, the mainframe continues to be the best solution for all types of workloads. A nice short demo of z/VOS is on YouTube, along with many other videos you can show to your iPhone-obsessed boss, demonstrating that consolidating those hundreds of MS SQL Server instances is also possible.
The story of virtualization continues to drive UNIX consolidation to the mainframe. In 2009 Allianz consolidated 60 servers into a single mainframe, saving substantial operating, licensing and energy costs while improving scalability. This story, detailed in a ComputerWorld article, is being repeated at many companies as the mainframe IFL, zIIP and zAAP specialty engines continue to bring processing power at PC-level prices. This consolidation activity has a very short-term return on investment, as these efforts usually pay for themselves in the first year and reduce power consumption dramatically, making it a “green” project.
The next time the hundreds of Windows or UNIX server configurations need an OS, database or other software upgrade, make sure to mention how System Z is saving other companies time and operating costs by consolidating these environments onto the best cost alternative -- the mainframe System Z.
During the 2010 IBM Z Summit road show, there were several presentations detailing the mainframe platform’s advantages over the UNIX and Windows platforms, such as the lowest total cost of ownership, the best availability and unparalleled scalability. These presentations cut through the rumors with detailed facts and figures for different platform configurations. Download these presentations and distribute them to your management as a little reminder of why the mainframe continues to be the best platform for your enterprise applications.
The Windows and UNIX platform proponents always discount and minimize the total cost of ownership, availability and scalability topics. It is our duty to periodically remind management of the extra costs of these UNIX and Windows systems, with their huge power consumption, software license fees and the software maintenance costs of working with several hundred or thousands of disparate systems. The mainframe quietly continues to process the majority of the transactions at the Fortune 500 companies, and everyone, especially younger management types who think the world can run on an iPhone, needs to understand that the System Z infrastructure is the best backbone for any company.
The System Z mainframe also continues to evolve, since it now has specialized processors such as the IFL, zIIP and zAAP to reduce overall operational and licensing costs. These specialty processors, along with the smaller System Z configurations, offer a single platform that can consolidate any number of UNIX workloads into one footprint with greener energy consumption and a better licensing configuration.
The presentations detail benchmarks, licensing fees and labor costs of various mainframe versus UNIX platform configurations. The figures show it sometimes takes double the number of processor cores just to begin to scale out a UNIX configuration. Even more UNIX processors are required to achieve transaction rates that still reach only one-fourth of what the mainframe System Z executes. These UNIX systems are also dedicated to the production transaction environment, with testing, QA and failover facilities yet to be priced or considered -- features that come standard within the System Z environment.
System Z also continues to grow because of its faster chips. Ask any PC or UNIX platform personnel “what platform has the fastest clock speed processors” and you will quickly find out who keeps up with industry information. The chip clock speeds of the System Z and other IBM platforms have improved like the rest of the PC industry. In fact, the System Z z10 chip operates at 4.4 GHz and comes in a 64-way quad-core configuration that can power through any application performance problem. This is almost twice as fast as the HP Superdome processors and a third faster than the Intel Nehalem chips.
So the mainframe continues to lead the industry. Does your management know the cost savings and performance figures of System Z? Tell them and show them the presentations before someone tries to “replace the mainframe” again with a more troublesome, power-hungry, poorly performing clustered iPhone configuration.
The previous Part 1 and Part 2 discussions highlighted the various DB2 for z/OS data warehouse DSNZPARMs within Versions 8 and 9. With the new DB2 temporal tables, the system needs all the DB2 data warehouse DSNZPARMs enabled, along with all the other new and improved DB2 10 DSNZPARMs, to maximize performance. To help you verify the new settings, below is a listing of the deprecated, improved and new DB2 DSNZPARMs to make sure your data warehouse and other applications get the best performance available within your DB2 10 environment.
Since the storage model of the new DB2 10 system has moved much of the processing above the 2GB memory line, most of these new and improved DSNZPARMs deal with new maximum memory settings. If you are monitoring system paging, running on hardware that has enough memory or running on one of the new zEnterprise 196 platforms, leverage the new DB2 memory capabilities and the new hardware as soon as possible.
Remember, even with a number of the DB2 10 components moving above the 2GB memory bar, there are still many components (sometimes as much as 25%) below the bar. Monitor your system paging and the size of the DB2 memory footprint, and adjust your settings incrementally and carefully.
Remember that monitoring, analyzing, improving and repeating in small increments is the best way to provide the best DB2 Data Warehouse DSNZPARMS and a stable high performance environment.
To access the DB2 10 DSNZPARM data warehouse chart in PDF format, please go to DB2 Information on www.davebeulke.com.
Previously we talked about the first alphabetic group of DB2 data warehouse DSNZPARMs that can improve your access paths and overall application performance. This week the second set of DSNZPARMs is discussed. Many of the data warehouse DSNZPARMs discussed are somewhat hidden within the regular DSNZPARM install panels. All of the DSNZPARMs discussed are available in DB2 9 for z/OS; some are also available in DB2 Version 8, and some only in DB2 Version 10.
Caution needs to be taken with all system settings, especially these data warehouse DSNZPARMs. These DSNZPARMs are meant to change and improve access paths, but each data warehouse design is unique, along with each application access path, so results will vary. If the data warehouse DB2 subsystem is shared with other OLTP or operational applications, I highly recommend fully documenting your current access paths and setting up a full PLAN STABILITY plan and package management structure before changing any DSNZPARMs. This documentation, along with a good PLAN STABILITY implementation and back-out practices, helps your environment quickly react to and back out any detrimental access paths encountered through an unexpected rebind of any program.
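A minimal sketch of the plan stability commands involved, using a hypothetical collection and package (COLLA.PROGA). The first REBIND keeps prior copies of the access paths; the second falls back to them if the new paths regress:

    REBIND PACKAGE(COLLA.PROGA) PLANMGMT(EXTENDED)
    REBIND PACKAGE(COLLA.PROGA) SWITCH(PREVIOUS)

With PLANMGMT(EXTENDED) in place before any DSNZPARM changes, a bad access path becomes a quick SWITCH(PREVIOUS) instead of an emergency.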
Some of the comments I’ve received regarding this issue highlighted the resurgence of data warehousing on the z/OS platform and why running a data warehouse on z/OS provides many advantages over other platforms. One advantage noted by several people is that when your data warehouse runs on z/OS, the huge ETL processes usually don’t have to transmit the data over a network. Even though network bandwidth is robust, avoiding this extra bottleneck can sometimes save hours of overhead, guaranteeing that your refresh jobs have enough time every day to provide critical refreshes of the data within your data warehouse.
Additionally, most of your source data warehouse data comes from the z/OS operational systems and can quickly be put into operational business intelligence data warehouses. This fresh data increases sales, provides real-time inventory or product availability updates and, most importantly, removes latency from the single, critical master data source of record for the enterprise.
Improve your system and application performance by adjusting these data warehouse DSNZPARMs; better access paths let you exploit the superior DB2 optimizer technology and get the most efficient performance available.
Previously, I’ve talked about the new DB2 10 temporal tables and how they are great for data warehousing applications. To leverage the temporal tables and data warehousing applications within the DB2 for z/OS environment the system needs the proper DSNZPARM settings. Since everyone has too many systems, some organizations only review system DSNZPARMs during migrations, meaning that many of the new settings might not be set up or enabled. It’s a good idea to review the DSNZPARMs on a regular basis and get them set up to maximize the performance of your data warehousing application SQL by leveraging all the DB2 optimizer capabilities.
First, get a listing of the current DSNZPARMs for your data warehouse system’s settings. This can be done a number of different ways: through your performance monitor, through the JCL that creates the DSNZPARMs or through the IBM REXX exec that calls the DB2 stored procedure DSNWZP. This DSNWZP stored procedure is one of the many DB2 administration routines and is set up during the system install.
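A minimal sketch of invoking DSNWZP directly; the “?” is a parameter marker for the single output value, which a tool or the IBM sample REXX exec would supply:

    CALL SYSPROC.DSNWZP(?)

The output parameter contains the current setting of every subsystem parameter, which you can then scan for the data warehouse DSNZPARMs discussed here.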
The data warehouse DSNZPARMs affect the performance of the DB2 system and its applications in many different ways, from the DB2 optimizer’s access path choices to the number of parallel processes DB2 will perform during data retrieval. To make sure that all of DB2’s performance features and functions are available, I’ve created an alphabetic list of DB2 data warehouse DSNZPARMs. Making sure these data warehouse DSNZPARMs are enabled and adjusted for your system will help your application fully exploit DB2 and get the best performance possible for your data warehouse.
Well, the votes are in: 22 billion rows is big enough data. It’s not the billions of web log rows of a Google or Facebook, but it’s big enough for everyone. One of the comments that struck me was that one in a million happens 22,000 times. So whatever your criterion is for big data, it is more a state of mind about the amount of data than the actual terabyte amounts or the number of rows. Regardless of what database systems you work with, big is a relative term. Just ask your SQL Server, Oracle and Sybase DBA friends what they consider a big system. Usually the answer is nowhere near what you get for DB2 z/OS or even DB2 LUW systems. I talked about this last year in my blog (“Performance is Relative”).
Other comments and questions received about last week’s blog asked for more clarification on the idea of keeping a database design simple. So below are three different ways to keep your big data data warehouse design simple.
First: There are reasons Bill Inmon’s and Ralph Kimball’s centralized and decentralized data warehouse ideas are so popular: those design patterns work. Design patterns for all types of IT applications, Java/.NET MVC (model view controller), various business models and standard processes have been extensively analyzed and endorsed over many years through the design pattern books, conferences and studies. The centralized and decentralized data warehouse design patterns work, and your design should use them for data warehouse performance. Big data or not, there is no reason to do something more complex. Starting with these design patterns, using and optimizing the simple pattern of Fact table(s) surrounded by Dimension table(s), will provide data warehouse performance. Decentralizing or extending these patterns with as many Fact tables and slowly moving Dimension tables as needed will minimize the amount of big data referenced in a typical transaction, and your data warehouse performance won’t be an issue; see the sketch below.
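A minimal star schema sketch of the Fact-surrounded-by-Dimensions pattern (all table and column names are hypothetical):

    CREATE TABLE DIM_DATE
          (DATE_ID    INTEGER NOT NULL PRIMARY KEY,
           CAL_DATE   DATE    NOT NULL);

    CREATE TABLE DIM_STORE
          (STORE_ID   INTEGER NOT NULL PRIMARY KEY,
           STORE_NAME VARCHAR(60) NOT NULL);

    CREATE TABLE FACT_SALES
          (DATE_ID    INTEGER NOT NULL REFERENCES DIM_DATE,
           STORE_ID   INTEGER NOT NULL REFERENCES DIM_STORE,
           SALES_AMT  DECIMAL(15,2) NOT NULL);

Each transaction touches the small, slowly moving dimension tables and only the slice of the large fact table it needs, which is what keeps the big data referenced per transaction to a minimum.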
Second: Make sure to normalize your big data design. It’s typical to try to consolidate everything within a data warehouse design. Unfortunately, having too many elements in a table forces too much data into an application transaction, and data warehouse performance can suffer. Just as the centralized and decentralized design patterns have been used for years, database table normalization has been around even longer because it logically optimizes your design. The normalization process has been documented everywhere over the years, and it is effective for making sure the table elements relate to the whole database table key(s). Combining table keys or designs causes excessive repeating data or data groups, while over-normalization leads to excessive application joins. Normalization is about striking a balance, and no one does it perfectly the first time. Normalize your data warehouse design several times and your transactions can strike a balanced performance for all critical applications; the sketch below shows the kind of repeating group that normalization removes.
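A before-and-after sketch with hypothetical names: the repeating phone columns force every transaction to drag along data it rarely needs, so they move to a child table keyed to the parent.

    -- Before: repeating group embedded in the customer row
    CREATE TABLE CUSTOMER
          (CUST_ID  INTEGER NOT NULL PRIMARY KEY,
           NAME     VARCHAR(60) NOT NULL,
           PHONE1   CHAR(10),
           PHONE2   CHAR(10),
           PHONE3   CHAR(10));

    -- After: the repeating group normalized into its own table
    CREATE TABLE CUSTOMER
          (CUST_ID  INTEGER NOT NULL PRIMARY KEY,
           NAME     VARCHAR(60) NOT NULL);

    CREATE TABLE CUSTOMER_PHONE
          (CUST_ID  INTEGER NOT NULL REFERENCES CUSTOMER,
           PHONE_NO CHAR(10) NOT NULL);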
Third: Design, test, redesign, test and repeat. Schedule enough database and application meetings and testing time to understand all the application transactions and reports. Data warehouse performance and modeling big data can get unwieldy, so testing your design early is vital. Sometimes big data table population numbers cause tools to abort; cut the number of zeros down, model all your application transactions against your database design, build it and run applications against it. Data warehouse performance requires real-life testing and actually running the code or SQL that interfaces with the design. No one has time to do it perfectly, but everyone will be mad if you have to redesign and do it all over right before implementation. Know your performance before production through thorough testing. Big data and data warehouse performance require design and testing; make sure to do both several times during your development with as much big data as possible.
These are only some of the simple truths that ensure your data warehouse performance and make your big data system a success.
Was 22 billion rows Big Data? As I began the new data warehouse project, I knew that data warehouse performance needed to be designed in from the very first data element. The new project was very interesting, with complex calculations, cross-referencing reports and Big Data warehouse performance challenges. Many standard reports that affected whole industries and government policies depended on data warehouse performance for timely delivery and on 100% accuracy, where even 99.999% would not do, for every bit of the Big Data.
Another consideration was that the implementation deadline for this high-performance Big Data data warehouse system was only 120 days away and wasn’t movable. All of these reasons made this Big Data system quite the challenge. Our small team of business users, SQL analysts and technical staff was committed, and we were successful in completing the implementation using agile scrum development techniques. The data warehouse performance was so improved that some of the processes went from 37 hours to under 7 minutes, and some reports came back in seconds. Complex calculations and report verification done during our data quality testing found bugs in the old system’s existing reports that had been trusted for over 20 years, so we knew everything was 100%. Our overall performance, 100% accuracy and quality implementation were a testament to a winning team effort.
There were many success factors that drove the team, the design, development and successful implementation of this 22 billion row Big Data system. Three success factors drove the data warehouse performance.
First: Keep it simple. Yes, I know everyone talks about keeping it simple in every development and system design; designing and doing it simply can be much harder. Instead of endless discussions and meetings, design the big picture and then see how all the reports, every type of interface and the users interact with the warehouse. Data warehouse performance is achieved through computing power. Simple designs that leverage parallelism, driving both I/O and calculation computing power to quickly produce reports, are key.
Second: Leverage all existing infrastructure and processes. The tight 120-day schedule from start to finish did not allow the team to have issues, only time-delayed solutions. Since we used an agile scrum type of development methodology, the scrum stories and the data warehouse performance of the Big Data reports were tested several times during the short development period. All used the same standard corporate testing procedures, documentation and approval processes that had been in place for years. The team contacted everyone needed to guarantee the data warehouse performance, got them involved early and got their requirements, feedback, improvements and eventual approval for all the components and the implementation. Everyone knew the Big Data project schedule ahead of time, so all infrastructure and processes were ready. Issues were worked until solutions were approved and implemented.
Third: Don’t be intimidated by Big Data. I have been fortunate to design, develop, deploy and study data warehouse performance since back in 1988, when I implemented my first “Big Data” system for the financial firm E.F. Hutton. At least back then it was considered Big Data, and it seems that every data warehouse system I have dealt with over the years has had bigger data than the previous one. Big Data will keep getting bigger, and it is just another couple of zeros on the end of your table population and design considerations. Data warehouse performance is built on the same principles as many years ago. As I mentioned with the first success factor, design for both I/O and CPU parallelism in every single item of your system. Only then can your Big Data system have the data warehouse performance it needs to speed up your processes and reports from hours to just minutes or even seconds, making everyone, especially your users, happy.
To figure out the best temporal table design you need to think through the various options and considerations that will affect its performance. The most important aspect of your temporal table is the answers your applications or users expect from it. The best way to start is to figure out the time aspect the application is trying to capture. Are your applications looking for the financial value, the insurance coverage level, enrollment status, customer value or something else?
The temporal table status can be contingent on two types of settings: business time or system processing time. If the processing is delayed and the system time is later than expected, does that affect your temporal table status? Or are you using the temporal table in a real-time scenario where either the business or system time will affect the meaning of the data? There are many ways to respond to these situations and questions, but the design decision should be based on the application and user questions that need to be answered. So it is best to test both SYSTEM_TIME and BUSINESS_TIME scenarios and see which design provides the best answers with the best performance; a bitemporal sketch you can test against follows.
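A minimal bitemporal sketch for testing both scenarios, with hypothetical table and column names; once versioning is enabled, the history table captures the old row images:

    CREATE TABLE POLICY
          (POLICY_ID CHAR(10) NOT NULL,
           COVERAGE  INTEGER NOT NULL,
           BUS_START DATE NOT NULL,
           BUS_END   DATE NOT NULL,
           SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
           SYS_END   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
           TRANS_ID  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
           PERIOD BUSINESS_TIME (BUS_START, BUS_END),
           PERIOD SYSTEM_TIME (SYS_START, SYS_END));

    CREATE TABLE POLICY_HIST
          (POLICY_ID CHAR(10) NOT NULL,
           COVERAGE  INTEGER NOT NULL,
           BUS_START DATE NOT NULL,
           BUS_END   DATE NOT NULL,
           SYS_START TIMESTAMP(12) NOT NULL,
           SYS_END   TIMESTAMP(12) NOT NULL,
           TRANS_ID  TIMESTAMP(12));

    ALTER TABLE POLICY
          ADD VERSIONING USE HISTORY TABLE POLICY_HIST;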
The next design point is to figure out your timestamp type. Do your temporal table application answers require distinct time periods throughout the system? Your DB2 10 system now has new capabilities to guarantee that a period is unique within the table system-wide. This DB2 syntax is defined WITHOUT OVERLAPS and can be used in your temporal table only with BUSINESS_TIME values. After the temporal table is created, an index is defined on it using your unique columns and the BUSINESS_TIME WITHOUT OVERLAPS keyword. BUSINESS_TIME is the only period the WITHOUT OVERLAPS keyword works with.
When BUSINESS_TIME WITHOUT OVERLAPS is specified, the columns of the BUSINESS_TIME period must not be specified as part of the constraint. The specification of BUSINESS_TIME WITHOUT OVERLAPS adds the following to the constraint:
- The end column of the BUSINESS_TIME period, in ascending order
- The start column of the BUSINESS_TIME period, in ascending order
The minimum value of a TIMESTAMP(12) is 0001-01-01-00:00:00.000000000000 and the maximum value is 9999-12-31-24:00:00.000000000000.
For DATE the minimum is 0001-01-01 and the maximum value is 9999-12-31.
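A minimal sketch of enforcing the uniqueness, continuing with the hypothetical POLICY table from above:

    CREATE UNIQUE INDEX IX_POLICY_BT
           ON POLICY (POLICY_ID, BUSINESS_TIME WITHOUT OVERLAPS);

With this index in place, DB2 rejects any insert or update that would give the same POLICY_ID two business-time periods that overlap.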
A system-generated check constraint named DB2_GENERATED_CHECK_CONSTRAINT_FOR_BUSINESS_TIME is also created during this definition process to ensure that the value of the end column is greater than the value of the start column. BUSINESS_TIME WITHOUT OVERLAPS must not be specified for a PARTITIONED index.
There are a number of considerations when creating your DB2 10 temporal table. When your application needs system-wide uniqueness, the BUSINESS_TIME WITHOUT OVERLAPS option provides the capability, with some cautions. Check out other posts on temporal tables via developerWorks or at my site, www.davebeulke.com.
It is usually pretty easy to quickly implement new DB2 features; DB2 makes it easy to improve your database performance with a new DSNZPARM or table space definition. Unlike most new features, however, DB2 temporal tables need research before you implement them. DB2 temporal tables offer great flexibility and many data warehouse design options that can be leveraged very effectively, or abused, with the wrong application design.
To prepare, you need to evaluate whether the application is appropriate for temporal tables. First, with DB2 temporal tables it is even more important to determine the frequency of the inserts, updates and deletes that are going to happen. Frequencies are always a good design point for any application, but they are especially important for DB2 temporal tables because of the way BUSINESS_TIME or SYSTEM_TIME is maintained and how all the data changes are captured within the associated history table. Every data change could really be two processes, because old rows need to be replicated into your history table. That could be a major performance consideration; see the sketch below.
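A sketch of why a single SQL statement can multiply into several row operations, using the hypothetical POLICY table from earlier; an update restricted to part of a business-time period splits rows, and versioning copies the old images to the history table:

    UPDATE POLICY
       FOR PORTION OF BUSINESS_TIME
           FROM '2011-01-01' TO '2011-07-01'
       SET COVERAGE = 15000
     WHERE POLICY_ID = 'A123';

If the targeted row’s period extends beyond those dates, DB2 updates the overlapping portion and inserts additional rows for the non-overlapping portions, and with SYSTEM_TIME versioning the replaced images are also written to the history table: one application statement, several physical changes.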
The next research points are the restrictions on DB2 temporal tables and history tables. The temporal table must be a regular table with the added BUSINESS_TIME or SYSTEM_TIME period definitions. No clone table capabilities, column masks, row permissions or security label columns are allowed. The same restrictions are in place for the history table.
The temporal table and its associated history table must be kept in sync. Restrictions exist around altering, adding or removing columns in the temporal table and the history table to guarantee integrity. Backup and recovery for the temporal table and its history table must also be kept in sync, and there are restrictions around DB2 utilities that could delete data from these tables. In addition, once the history table is defined, its table space or table cannot be dropped. So make sure the columns desired in your DB2 temporal tables are stable and well defined for the application.
There are several resources that should be reviewed before designing your first application using DB2 temporal tables. The first is the IBM DB2 10 manuals; by reading these friendly manuals you get a good understanding of the syntax, various examples and details about all the restrictions. Next, there have been some presentations about DB2 temporal tables at past IDUG and IOD conferences. Track down your colleagues who went to the conferences and get the CD or access to the website for the presentation downloads.
The improvements in DB2 10 technology are definitely making it easier to build performance into business intelligence systems. SQL data warehousing extensions such as RANK, DENSE_RANK and materialized query tables (MQTs) were great when they came out in DB2 8 and DB2 9. DB2 10 takes another giant step with additional data warehousing features.
Use a DB2 10 Temporal Table
DB2 10 provides Temporal and Bi-Temporal database tables along with the OLAP capabilities of moving sums, averages and aggregates. These additional improvements within the DB2 engine, SQL and table designs provide faster result set capabilities for overall business intelligence performance. Also the DB2 10 enhancements with the new scalar and table functions can provide performance for all types of OLAP activities.
Since these business intelligence performance features are built directly into the DB2 10 engine, temporal designs, moving sums, averages and aggregates will become standard calculations and properties for all reporting requirements. Calculations will be done using different groups of temporal time periods or location-based data for product sales, store location or other common data warehouse criteria. Data warehousing performance will improve dramatically as DB2 data warehouse applications are quickly designed to leverage these common built-in features and functions for all types of business intelligence performance requirements.
By using a temporal data warehouse design along with partitioning, time frame or common table design, the standard OLAP functions can provide quick calculations for complex or simple data warehouse reporting requirements. Also, given the improvements within SQL, moving sums, averages and aggregates can be included in common table expressions, SQL SELECT lists or ORDER BY statements to satisfy any application requirement; a short sketch follows.
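A minimal sketch of a DB2 10 moving aggregate in a SELECT list, against a hypothetical DAILY_SALES table; the OLAP specification computes a seven-row moving average per store without any application code:

    SELECT STORE_ID, SALE_DATE, SALES_AMT,
           AVG(SALES_AMT) OVER (PARTITION BY STORE_ID
                                ORDER BY SALE_DATE
                                ROWS BETWEEN 6 PRECEDING AND CURRENT ROW)
                          AS MOVING_AVG_7DAY
      FROM DAILY_SALES;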
Leverage DB2 10 for Business Intelligence Performance
Business intelligence performance always improves when the designers and developers leverage the DB2 engine’s built-in functions. Use temporal tables, bi-temporal tables, moving sums, averages and aggregates, and your business intelligence performance will only get better with DB2 10.
Corporations build business intelligence and data warehouse systems to gain competitive advantage. DB2 helps you create business intelligence and data warehouse systems that give the performance corporations need by providing many industry leading features.
Combine DB2 MQTs for DB2 Business Intelligence Databases
In other posts I talked about DB2 business intelligence and data warehouse designs that combine multiple DB2 materialized query tables (DB2 MQTs). Combining these DB2 MQTs with different weekly, monthly, quarterly and yearly data provides a great way to quickly get reporting information; a sketch of one such MQT follows.
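A minimal MQT sketch (hypothetical DAILY_SALES source table) that pre-aggregates monthly totals so the optimizer can answer summary queries without scanning the detail data:

    CREATE TABLE SALES_MONTHLY_MQT AS
          (SELECT YEAR(SALE_DATE)  AS SALE_YR,
                  MONTH(SALE_DATE) AS SALE_MO,
                  STORE_ID,
                  SUM(SALES_AMT)   AS TOTAL_SALES
             FROM DAILY_SALES
            GROUP BY YEAR(SALE_DATE), MONTH(SALE_DATE), STORE_ID)
          DATA INITIALLY DEFERRED REFRESH DEFERRED
          MAINTAINED BY SYSTEM
          ENABLE QUERY OPTIMIZATION;

    REFRESH TABLE SALES_MONTHLY_MQT;

Similar MQTs at the weekly, quarterly and yearly levels give the reporting queries a pre-summarized table at whatever grain they need.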
But business intelligence data warehouse systems are about more than just providing a platform to total sales figures quickly. As an IBM commercial pointed out, business intelligence is about getting a deeper understanding of your business. Your data warehouse design needs to supply the extra data or information that gives context, comparisons and deeper meaning to the data.
Drill Down into the Requirements for Your DB2 Business Intelligence and Data Warehousing Systems
Understanding the flow of the sales totals and noticing whether sales trends are improving or declining is only the beginning. The ability to drill into your data warehouse information quickly and meaningfully is the start of a business intelligence system. Being able to drill through big department categories and then analyze down to the tiniest pixel-color levels within unstructured data items is becoming part of the data warehouse requirements for new systems.
When you are designing your new DB2 data warehouse business intelligence system, don’t stop with the major questions that are going to be asked. Follow up to uncover additional questions that will be asked after the first question gets answered. After you have uncovered four or five levels of questions, you’re ready to start the DB2 business intelligence data warehouse design. This way your system can provide better value and deep insights into improving the business.
As the European IDUG conference approaches, it’s good to remember the outstanding technical experiences discussed at IDUG with DB2 for z/OS and LUW from many user companies. The knowledge comes from users, consultants and IBMers who have implemented systems and applications and have that extra in-depth knowledge of the multiple versions of DB2, including DB2 10 from the beta program. The presentations always provide a little extra tidbit that helps you avoid problems, because speakers discuss the details of their scars from the front lines.
At IDUG North America, all the DB2 10 beta users’ presentations were really great. Most of the DB2 10 beta customers gave their impressions and testing results of DB2 10. They demonstrated why DB2 10 will be rolling into production environments soon. Some companies were even planning on consolidating DB2 systems or data sharing members because of the improved memory capabilities, since a single DB2 10 system can now handle almost 10 times more users than previous versions.
Several companies were evaluating DB2 10 against SAP and PeopleSoft applications. The testing of these applications has gone well, and the companies were waiting for SAP and Oracle to get back to them with their certification of DB2 10. It seems the beta customers are testing more extensively and faster than these software companies. It really doesn’t surprise me that Oracle would be late in certifying its application, since Oracle continues to lose customers due to the improvements on the mainframe and the other versions of DB2.
Also, the talk at the North American IDUG conference was that many Oracle shops that also have DB2 LUW are converting a large number of their Oracle instances to DB2 because of the crazily priced Oracle contracts. Saving money with DB2 LUW over Oracle continues as companies look at the performance and reliability of DB2. Expect that trend to continue as DB2 10 rolls out; companies are even consolidating their Oracle instances into their DB2 for z/OS environments.
I look forward to getting the updates at IDUG Europe.
As the 2011 European IDUG approaches, it’s good to remember the annual North American IDUG conference in May, where the latest and greatest information about DB2 was unveiled. The IDUG Board of Directors and especially the Conference Planning Committee did a great job providing qualified users, consultants and IBM developers as speakers to discuss their technical experiences with DB2 for z/OS and LUW, including the new release of DB2 10. DB2 performance tuning, application development and DBA topics filled the agenda, and everyone learned the latest techniques to help their companies gain a competitive advantage.
The DB2 10 beta program was the largest ever, with testing by 24 companies, and the most successful testing period for a new DB2 version.
The DB2 10 for z/OS software is only about a year old and is definitely being tested by an ever-growing number of companies. The very nature of a new release requires regular application of maintenance for both z/OS and DB2 for the most stable environment and to fully leverage its performance improvements. Companies planning to go to DB2 10 should plan on at least quarterly maintenance cycles to fully incorporate all APARs, HIPERs and other fixes or enhancements.
DB2 10 continues to be enhanced because of issues discovered during the beta program and other items that needed additional work. The DB2 for z/OS Version 9 plan stability feature is getting a lot of attention for its ability to help preserve production package access paths. The additional DB2 10 features of APCOMPARE and APREUSE are still being worked on. Even if these new enhancements don’t make it into the release, plan stability remains a great feature, introduced in DB2 9 for z/OS, for guaranteeing that access paths stay the same or improve with every new version of a program.
For those of you looking for the DB2 10 White Paper, the sign-up link for the free PDF download from the IBM website is below. You have to sign up because IBM wants to know how many people download the DB2 10 White Paper, but it’s free and you can opt out of any email spam. So download it today at: https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=sw-infomgt&S_PKG=db2_zos_reduce_costs