Within the zEnterprise announcement in July 2010 there were several exciting
comments about the new query performance within the IBM Smart Analytics
Optimizer. This smart system leverages the integrated platform, providing
industry-leading scalability for data warehouse and business intelligence workloads.
Queries that scan entire terabytes of data, previously avoided, can now be
executed in seconds through IBM Smart Analytics Optimizer processing. This
capability provides data mining features for quickly getting answers to the
clustering, association, classification and prediction questions within
your data warehousing environment.
Within some of the documentation, IBM testing of the Smart Analytics
Optimizer shows huge performance boosts. Queries sometimes showed an
elapsed-time improvement of 54 times and a query cost reduction of 711 times.
This type of improvement is very exciting and exactly what new data
warehousing and business intelligence workloads need on the mainframe. Check out all the
zEnterprise documentation—especially the information on the IBM Smart Analytics
Optimizer. This was just the beginning of a resurgence of the new mainframe
that supports any mainframe, UNIX or Windows workload. Known as a company’s
“private cloud,” it allows IT departments to leverage tremendous performance
improvements while reducing query cost substantially.
The zEnterprise IBM computer was announced in July 2010 and it can run any workload. Yes, any workload! Your company can now run mainframe, UNIX and some Windows applications within the new environment. The new zEnterprise is designed with the performance and capacity for large-scale consolidation of any workload. Just as UNIX systems have recently been consolidated into the mainframe environment, it is now possible to consolidate the myriad of Windows-based systems with the integration of z/VM 6.1. This new computing architecture helps consolidate performance, integrate data better and improve overall management of the computing environment, with unified standards for performance, security and optimization for better overall availability.
The zEnterprise takes consolidation one step further: any workload can be implemented within its standard environment, with dramatic energy savings, using workload optimizers through zEnterprise’s integration of the IBM POWER7 and IBM System x blades, allowing the consolidation of diverse application workloads.
This helps deliver the mainframe’s reliability, availability and security to these other platforms while helping your company lower risk and reduce overall computing and energy costs. The new architecture also helps centralize all the data from these diverse platforms into a central hub, eliminating the many copies and remote islands of data. This improves integration, reduces latency and boosts performance as heterogeneous architectures perform more end-to-end enterprise transactions.
Consolidated computing through the new zEnterprise system is just beginning. Now all architectures, systems and applications can truly be evaluated for their performance. Regardless of the operating system, programming language or network protocol, your company can get it all done with the lowest total cost of ownership, with up to 96 of the newest and fastest CPUs (5.2 GHz) available on any computer in the world through the new zEnterprise system.
Check out all the zEnterprise information at the links below and the interesting comments about z/VM from PCWorld. Then imagine all your enterprise data integrated into a high performance, energy efficient and optimized system.
ibm.com/systems/zenterprise/
Within all database systems there are tables that hold transactions or some other type of time-based data. These transactions unfortunately build up over time, retained for regulatory compliance, for infrequent access, or to support development of historical and trending reports.
Within your database design, this time-based transaction data builds up quickly, and managing it so the volume does not affect performance is critical. Sometimes fine-grained or high-frequency transactions such as web clicks, banking transactions or stock trades build up terabytes of data per day, quickly overwhelming even robust storage arrays.
Separating out the real-time transaction data from the old data within your database requires planning and design steps. These steps help separate the old data from the new and guarantee that application and SQL performance does not suffer when your database is fully populated. Separating the old and new data also helps performance management, so more resources can be delegated to maintaining transaction performance for business operational success.
DB2 10 Temporal Tables with their built-in functionality automatically understand the business time or system time that the data was entered into the system. This functionality is great for finding out the condition of the business as of a certain time. Two new table period definitions, BUSINESS_TIME and SYSTEM_TIME, are available. These new time period definitions are used in the new DB2 temporal table definitions to provide system-period, application-period (business time) or bitemporal (both system and business time) database tables.
Many systems today have manual applications or utilities that migrate their real-time data to history tables. With the new temporal tables, DB2 itself uses the built-in system time columns to migrate prior versions of temporal table data automatically to a user-defined HISTORY table. Having these facilities built into the database greatly improves regulatory compliance, operations and overall performance.
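As an illustrative sketch (the POLICY table and its column names are made up here, and the exact syntax should be verified against the DB2 10 for z/OS SQL Reference), a system-period temporal table and its history table might be defined like this:

```sql
-- Base table with a SYSTEM_TIME period (hypothetical names)
CREATE TABLE POLICY
  (POLICY_ID  INTEGER       NOT NULL,
   COVERAGE   INTEGER       NOT NULL,
   SYS_START  TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   SYS_END    TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   TRANS_ID   TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD SYSTEM_TIME (SYS_START, SYS_END));

-- Matching history table (same columns, no generated attributes)
CREATE TABLE POLICY_HIST
  (POLICY_ID  INTEGER       NOT NULL,
   COVERAGE   INTEGER       NOT NULL,
   SYS_START  TIMESTAMP(12) NOT NULL,
   SYS_END    TIMESTAMP(12) NOT NULL,
   TRANS_ID   TIMESTAMP(12));

-- Tell DB2 to maintain the history table automatically
ALTER TABLE POLICY ADD VERSIONING USE HISTORY TABLE POLICY_HIST;

-- Ask for the condition of the business as of a point in time
SELECT POLICY_ID, COVERAGE
  FROM POLICY
  FOR SYSTEM_TIME AS OF TIMESTAMP('2010-07-01-00.00.00');
```

Once versioning is enabled, every UPDATE or DELETE against POLICY moves the prior row version into POLICY_HIST without any application or trigger code.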
More information can be found on DB2 10 Temporal Tables and other DB2 10 features in the IBM White Paper: IBM DB2 10 for z/OS Beta -- Reduce Costs with Improved Performance.
The North American IDUG Conference and the IOD EMEA conference highlighted all the functionality and features within DB2 10. The more you hear about these functions and features, the more you realize they are really going to save your company money right away.
Justifying a DB2 upgrade is always necessary for management, and for the new DB2 10 it is going to be a lot easier with its many CPU-saving features. With all the new features saving 5-7% right out of the box in conversion mode, DB2 10 offers existing systems and applications immediate savings. Add in all the new options such as Hash Access, Index Include Columns and other features, and developer and DBA activities become a lot easier, reducing your overall DB2 CPU capacity requirements by possibly up to 20%. Think of getting a 20% decrease in your costs and the costs of the associated third party software and you can see why management is going to push for getting DB2 10 installed as soon as possible.
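As a hedged sketch of what those two options look like (table, index and space names are made up, and the exact clauses should be confirmed in the DB2 10 SQL Reference):

```sql
-- Hash Access: direct single-row lookups by key with no index probe
CREATE TABLE ACCOUNT
  (ACCT_ID  BIGINT        NOT NULL,
   BALANCE  DECIMAL(15,2) NOT NULL)
  ORGANIZE BY HASH UNIQUE (ACCT_ID)
  HASH SPACE 64 M;

-- Index Include Columns: enforce uniqueness on one column while
-- carrying extra columns so the index becomes index-only (covering),
-- often letting one index replace two
CREATE UNIQUE INDEX ORDER_IX1
  ON CUST_ORDER (ORDER_ID ASC)
  INCLUDE (ORDER_TOTAL);
```

Hash organization suits tables accessed almost exclusively by exact key, while INCLUDE columns help the far more common index-only access pattern.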
One of the first ways DB2 10 reduces the CPU within your systems and applications is through a new access path called the Range-List Index Scan. This new access path should immediately assist many of your applications that have multiple WHERE predicates referencing the same index. In DB2 9 the access paths scanned the index once for each of the WHERE predicate references. Within DB2 10 the Range-List Index Scan scans the index only once to retrieve all the RIDs, saving the extra CPU and I/Os of multiple separate scans, RID consolidation and elimination of duplicate RID entries. This can be a huge improvement for applications that have paging or other types of searching SQL WHERE predicates.
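The classic shape that benefits is the cursor-paging predicate. A sketch, assuming an index on (LASTNAME, FIRSTNAME) and illustrative table and column names:

```sql
-- Restart the result set after the last row shown ('SMITH', 'JOHN').
-- In DB2 9 each OR leg could drive its own index scan; in DB2 10 a
-- range-list index scan can retrieve all qualifying RIDs in one pass.
SELECT LASTNAME, FIRSTNAME, PHONENO
  FROM CUSTOMER
 WHERE (LASTNAME = 'SMITH' AND FIRSTNAME > 'JOHN')
    OR (LASTNAME > 'SMITH')
 ORDER BY LASTNAME, FIRSTNAME;
```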
When your applications are being prepared for migration to DB2 10, the plan stability features started in Version 9 and improved in DB2 10 will assist you. Within DB2 10 this enhancement lets the application manager lock down any package to a particular access path. If an attempt to rebind the package would change the access path, the bind process can raise an error or issue a warning. This protects you against access path regressions and alerts your developers to a possible problem.
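A sketch of how that lock-down might look with the DB2 10 bind options (the collection and package names are hypothetical, and the APREUSE/APCOMPARE behavior should be verified in your release's Command Reference):

```sql
-- Keep prior copies of the package's access paths
REBIND PACKAGE(COLLID1.PGM1) PLANMGMT(EXTENDED)

-- Reuse the saved access paths and fail the rebind (or use WARN
-- to just warn) if they cannot be reproduced
REBIND PACKAGE(COLLID1.PGM1) APREUSE(ERROR) APCOMPARE(ERROR)

-- Back out to the previous access paths if a problem appears
REBIND PACKAGE(COLLID1.PGM1) SWITCH(PREVIOUS)
```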
So if you want a 20% decrease in your CPU demand, or just want to start thinking about staying up with your competition, start your plans for DB2 10 now.
Dave Beulke is an internationally recognized DB2 consultant, DB2 trainer and education instructor. Dave helps his clients improve their strategic direction, dramatically improve DB2 performance and reduce their CPU demand, saving millions in their systems, databases and application areas within their mainframe, UNIX and Windows environments.
DB2 upgrades are not the same in any two companies and may not even be the same within the same company. As multiple DB2 systems sometimes support different applications with varying requirements, the Return on Investment (ROI) is completely different. Also, comparing DB2 z/OS or DB2 LUW upgrades to Oracle system upgrades is not even close to an apples-to-apples exercise, as these databases have completely different contract clauses, ROI requirements and upgrade factors. Apples and oranges are both edible, but that is where the comparison and similarities stop.
DB2 upgrades depend on the technology. First-hand experience has shown that bad research, poor database design, poor application development techniques, bad choices in third party vendor tools and an assortment of other factors make ROI equations for upgrades complex or almost impossible. Having helped fix bad technology decisions, redesigned bad database designs and fixed SQL application performance problems, I know that reducing costs is always on everyone’s mind. Upgrading any software especially raises cost consciousness.
To minimize upgrade costs, strive for better understanding and knowledge of the DB2 technology. This will help everyone avoid unnecessary costs and bad decisions. Work on minimizing the DB2 10 upgrade costs through better management of your DB2 systems today. I’ve mentioned the considerations of SMS environments and improving your security definitions to get ready to separate security data access.
This week the suggestion for preparing for DB2 10 is to begin analyzing your DB2 application techniques. The first thing to analyze is how your plans and packages are put together. IBM’s DB2 engineers have been telling everyone to get to DB2 packages within their environment for many releases. In DB2 10 the old plan management technique of binding DBRMs directly into plans must be replaced by binding them into package collections (COLLIDs) and then binding those collections into your DB2 plans. It is a small difference but it has big implications. If your shop still uses the technique of putting DBRMs directly into plans, it is time to start a process to migrate to the new method.
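The migration pattern looks roughly like this (the collection, plan, program and library names below are made up for illustration):

```sql
-- Bind each DBRM into a package within a collection
BIND PACKAGE(COLLID1) MEMBER(PGM1) LIBRARY('YOUR.DBRM.LIBRARY')

-- Replace the plan so it resolves packages from the collection
-- instead of carrying the DBRMs directly
BIND PLAN(PLANA) PKLIST(COLLID1.*) ACTION(REPLACE)
```

Later DB2 releases can also convert bound DBRMs to packages automatically via a COLLID option on REBIND PLAN; it is worth checking whether your release and maintenance level support it.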
Next, the compilers that your applications utilize also need to be current. The DB2 10 changes within the COBOL precompiler generate new SQLCA and SQLDA structures, along with generating COMP-5 binary data attributes instead of COMP-4 or COMP. Even though there are still some ways to handle old compilers, it is best to review your COBOL compiler versions to make sure they will be ready for your eventual DB2 10 migration.
Whether you are currently using DB2 Version 8 or Version 9, getting ready for all the CPU efficiencies in DB2 10 is easy. Your company may not be thinking about migrating to DB2 10 yet, but DB2 10 is getting huge industry-wide support as the cost savings begin to be realized by management. Everyone wants to save money, and with everyone trying to squeeze their budgets, a faster adoption of DB2 10 is in your future. It is best to be prepared, so here are a few things you can do in DB2 Version 8 or 9 that will make it easier to get to DB2 10.
Embrace System Managed Storage (SMS)
No, I am not Darth Vader telling you to embrace the dark side, but letting you know that the DB2 10 catalog and directory use SMS. Using SMS is a hard thing to do for many companies and, personally, I am not a real big fan of SMS. But DB2 10 and additional improvements in storage have made me think twice, and SMS may be more help than problem. Being a performance geek, I recommend limiting or not sharing storage volumes within the DB2 SMS definitions. The SMS performance key, if possible, is to segregate the DB2 SMS definitions completely, yes completely, away from other storage definitions. SMS horror stories about I/O troubles are well known, and through my consulting clients I see SMS at the center of many I/O problems. Embrace SMS, but be diligent guarding your DB2 system’s I/O performance, especially your DB2 catalog and directory data sets.
Sharpen Your Security
I am a big supporter of compliance, protecting data assets, security audits and all the security team’s efforts to protect the company and all its business information. DB2 10 offers more privilege and authority granularity in your DB2 systems, and security experts will need to embrace these features to separate out data access, utility operations and day-to-day work. Start auditing the security usage of your systems area, your various DBA support groups and your different local and distributed applications, and analyze their needs. Do their activities really require data access? Could the DB2 DBA work be done using a different security level, such as DB2 SYSOPR or another authority? More DB2 10 security privilege and authority granularity improvements are going to affect your security profiles. Get ready and start the security analysis now.
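For example, the new DB2 10 authorities are intended to let you grant administration without data access. A sketch only: the authorization IDs are made up, and the exact clause order should be checked against the DB2 10 SQL Reference before use:

```sql
-- A system DBADM that can administer objects but cannot read the
-- data or grant access to others (separation of duties)
GRANT DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL
   ON SYSTEM TO DBA_ADMIN;

-- Only the EXPLAIN privilege for performance analysis work,
-- with no ability to execute the SQL being analyzed
GRANT EXPLAIN TO PERF_ANALYST;
```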
Learn more about DB2 10 through the replay of Roger Miller’s and my 2010 webcast: DB2 10 for z/OS – Helping you improve operational efficiencies and gain competitive advantage. The webcast replay is available here…
After the original post of “Reminding Management of the Advantages of System Z,” many people commented on how their management is increasingly out of touch with the mainframe. Comments also stated that the System Z environment is really processing almost all of the transactions in their companies while the Windows platform systems continue to have scalability issues.
Although mainframe revenue for IBM suffered in 2009 because of its upgrade cycle, the introduction of the System z10 platform demonstrates it is the best open system. Yes, that is correct: the mainframe is the most open system available because it runs all types of workloads: the legacy standards of Assembler, COBOL, PL/I, etc., but also C++, C#, Java, PHP and the rest of the languages that run on UNIX and Windows boxes.
Also, some people are starting to run “virtualized” Windows on mainframe environments. PCWorld highlighted this capability in early 2009. With the System Z speed, scalability and network, the mainframe continues to be the best solution for all types of workloads. A nice short demo of z/Vos is on YouTube, along with many other videos that you can show to your iPhone-obsessed boss, demonstrating that consolidating those hundreds of MS SQL Server instances is also possible.
The story of virtualization continues to drive UNIX consolidation to the mainframe. In 2009 Allianz consolidated 60 servers into a single mainframe, saving substantial operating, licensing and energy costs while improving scalability. This story, detailed in this ComputerWorld article, is being repeated at many companies as the mainframe IFL, zIIP and zAAP specialty engines continue to bring processing power at PC-like, minimized prices. This consolidation activity has a very short-term return on investment, as these efforts usually pay for themselves in the first year and reduce power consumption dramatically, making each a “green” project.
The next time the hundreds of Windows or UNIX server configurations need an OS, database or other software upgrade, make sure to mention how System Z is saving other companies time, operating costs and overall costs by consolidating these environments onto the best cost alternative -- the mainframe System Z.
During the 2010 IBM Z Summit road show, there were several presentations detailing the mainframe platform advantages over UNIX and Windows platforms such as the lowest total cost of ownership, the best availability and unparalleled scalability. These presentations cut through the rumors with detailed facts and figures of different platform configurations. Download these presentations and distribute them to your management for a little reminder why the mainframe continues to be the best platform for your enterprise applications.
The Windows and UNIX platform proponents always discount and minimize the total cost of ownership, availability and scalability topics. It is our duty to periodically remind management of the extra costs of these UNIX and Windows systems: huge power consumption, software license fees, and the maintenance costs of working with several hundred or thousands of disparate systems. The mainframe quietly continues to process the majority of the transactions at the Fortune 500 companies, and everyone, especially younger management types who think the world can run on an iPhone, needs to understand that the System Z infrastructure is the best backbone for any company.
The System Z mainframe is also evolving: it now has specialized processors such as the IFL, zIIP and zAAP to reduce overall operational and licensing costs. These specialty processors, along with the smaller configurations of the System Z, offer a single platform that can consolidate any number of UNIX workloads into one footprint, with greener energy consumption and a better licensing configuration.
The presentations detail benchmarks, licensing fees and labor costs of various mainframe versus UNIX platform configurations. The figures show it sometimes takes double the number of processor cores on a UNIX configuration to start to scale out a configuration. Even more UNIX processors are required to achieve transaction rates that are still only performing one-fourth of what the mainframe System Z executes. These UNIX systems are also dedicated to the production transaction environment with no thought of supporting testing, QA or failover facilities that have yet to be priced or considered, features that come standard within the System Z environment.
System Z also continues to grow because of its faster chips. Ask any PC or UNIX platform personnel “what platform has the fastest clock speed processors” and you will quickly find out who keeps up with the industry information. The chip clock speeds of the System Z and other IBM platforms have improved like the rest of the PC industry. In fact, the System z10 chip operates at 4.4 GHz and comes in a 64-way, quad-core configuration that can power through any application performance problem. This is almost twice as fast as the HP Superdome processors and a third faster than the Intel Nehalem chips.
So the mainframe continues to lead the industry. Does your management know the cost savings and performance figures of System Z? Tell them and show them the presentations before someone tries to “replace the mainframe” again with a more troublesome, power-hungry, poorly performing clustered iPhone configuration.
The previous Part 1 and Part 2 discussions highlighted the various DB2 Data Warehouse DSNZPARMS for z/OS within Versions 8 and 9. With the new DB2 Temporal Tables, the system needs all the DB2 Data Warehouse DSNZPARMS enabled, along with all the other new and improved DB2 10 DSNZPARMS, to maximize performance. To ensure the new DB2 DSNZPARMS are enabled, below is a listing of the deprecated, improved and new DB2 DSNZPARMS, to make sure your data warehouse and other applications get the best performance available within your DB2 10 environment.
Since the storage model of the new DB2 10 system has moved much of the processing components above the 2GB memory line, most of these new and improved DB2 DSNZPARMS deal with new maximum memory settings. If you are monitoring system paging, running on hardware that has enough memory or running on one of the new z196 hardware platforms, leverage the new DB2 memory capabilities and the new hardware as soon as possible.
Remember even with a number of the DB2 10 components moving above the 2GB memory bar, there are still many components (sometimes as much as 25%) below the bar. Monitor your system paging and the size of DB2 memory footprint and adjust your settings incrementally and carefully.
Remember that monitoring, analyzing, improving and repeating in small increments is the best way to provide the best DB2 Data Warehouse DSNZPARMS and a stable high performance environment.
To access the DB2 10 DSNZPARM data warehouse chart in PDF format, please go to DB2 Information on www.davebeulke.com.
Previously we talked about the first alphabetic group of DB2 data warehouse DSNZPARMS that can improve your access paths and overall application performance. This week the second set of DSNZPARMS is discussed. Many of the data warehouse DSNZPARMS discussed are somewhat hidden within the regular DSNZPARM install panels. All of the DSNZPARMS discussed are available in DB2 9 for z/OS; some are also available in DB2 Version 8 or DB2 10.
Caution needs to be taken with all system settings, and especially these data warehouse DSNZPARMS. These DSNZPARMS are meant to change access paths and improve them, but each data warehouse design is unique, along with each application access path, so results will vary. If the data warehouse DB2 subsystem is shared with other OLTP or operational applications, I highly recommend fully documenting and setting up a full PLAN STABILITY plan and package management structure for your current access paths before changing any DSNZPARMS. This documentation, along with a good PLAN STABILITY DB2 plan and package management implementation and back-out practices, helps your environment quickly react to and back out any detrimental access paths encountered through an unexpected rebind of any program.
Some of the comments I’ve received regarding this issue highlighted the resurgence of data warehousing on the z/OS platform and why running a data warehouse on z/OS provides many advantages over other platforms. One advantage noted by several people is that when your data warehouse runs on z/OS, the huge ETL processes usually don’t have to transmit the data over a network. Even though network bandwidth is robust, avoiding this extra bottleneck can sometimes save hours of overhead, guaranteeing that your refresh jobs have enough time every day to provide critical refreshes of the data within your data warehouse.
Additionally, most of your source data warehouse data comes from the z/OS operational systems and can quickly be put into operational business intelligence data warehouses. This fresh data increases sales, provides real-time inventory or product availability updates and, the most important factor, removes latency from your critical single-point master data source of record for the enterprise.
Improve your system and application performance by adjusting these data warehouse DSNZPARMS to improve your access paths, leveraging the superior DB2 optimizer technology for the most efficient performance available.