Corporations build business intelligence and data warehouse systems to gain competitive advantage. DB2 helps you create business intelligence and data warehouse systems that deliver the performance corporations need by providing many industry-leading features.
Combine DB2 MQTs for DB2 Business Intelligence Databases
In other posts I talked about DB2 business intelligence and data warehouse designs that combine multiple DB2 materialized query tables (DB2 MQTs). Combining these DB2 MQTs with different weekly, monthly, quarterly and yearly data provides a great way to get reporting information quickly.
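As a sketch of the idea, the following JDBC snippet creates a weekly sales MQT; the SALES_DETAIL table, its columns and the connection details are illustrative assumptions, not from a real system:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateWeeklyMqt {
    public static void main(String[] args) throws Exception {
        // Illustrative connection details only; use your own data source.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:db2://dbhost:50000/DWHDB", "dbuser", "dbpass");
             Statement stmt = con.createStatement()) {
            // A deferred-refresh, system-maintained MQT that summarizes a
            // hypothetical SALES_DETAIL table by year, week and department.
            stmt.executeUpdate(
                "CREATE TABLE SALES_WEEKLY_MQT AS" +
                " (SELECT YEAR(SALE_DATE) AS SALE_YEAR," +
                "         WEEK(SALE_DATE) AS SALE_WEEK," +
                "         DEPT_ID," +
                "         SUM(SALE_AMT) AS TOTAL_SALES," +
                "         COUNT(*) AS SALE_COUNT" +
                "    FROM SALES_DETAIL" +
                "   GROUP BY YEAR(SALE_DATE), WEEK(SALE_DATE), DEPT_ID)" +
                " DATA INITIALLY DEFERRED REFRESH DEFERRED" +
                " MAINTAINED BY SYSTEM ENABLE QUERY OPTIMIZATION");
            // Populate the summary; reporting queries can then be rewritten
            // by the optimizer to read the MQT instead of the detail rows.
            stmt.executeUpdate("REFRESH TABLE SALES_WEEKLY_MQT");
        }
    }
}
```

The monthly, quarterly and yearly MQTs follow the same pattern with different grouping columns.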
But business intelligence data warehouse systems are more than just providing a platform to total sales figures quickly. As an IBM commercial pointed out, business intelligence is about getting a deeper understanding of your business. Your data warehouse design needs to be able to provide the extra data or information that provides context, comparisons and a deeper meaning to data.
Drill Down into Requirements for Your DB2 Business Intelligence and Data Warehousing Systems
Understanding the flow of the sales totals and noticing whether sales trends are improving or declining is only the beginning. The ability to drill into your data warehouse information quickly and meaningfully is the start of a business intelligence system. Being able to drill through big department categories and then analyze down to the tiniest pixel-color levels within unstructured data items is becoming part of the data warehouse requirements for new systems.
When you are designing your new DB2 data warehouse business intelligence system, don’t stop with the major questions that are going to be asked. Follow up to uncover additional questions that will be asked after the first question gets answered. After you have uncovered four or five levels of questions, you’re ready to start the DB2 business intelligence data warehouse design. This way your system can provide better value and deep insights into improving the business.
Working with a client’s SOA environment recently showed several interesting DB2 performance issues. One DB2 performance issue that was quite stunning was the large number of connections that the .Net and Java applications were making to DB2 and other systems. Researching the system and applications further uncovered a wide disparity in the handling and the number of connections each of the many application modules was using. Proper connection handling is very important to DB2 performance for three main reasons: acquiring new connections is expensive, application connections maintain database locks, and connections define unit-of-work transactions.
First, getting a database connection is expensive because of all those great things that a database provides, such as security and integrity. Since the database is important, every connection request must have its security checked and be authorized. This security authorization against the database system and the data desired is quick, but it takes time. In addition, database processing guarantees integrity through its transaction logging of the unit of work. Starting a new database unit of work is also fast, but it must be managed within the database so that it can be backed out should the transaction fail.
Second, within the database connection, the SQL processing selects, inserts, updates, or deletes data. These actions hold locks against the data referenced and prevent other applications from trying to update the same data or reference the same deleted data. DB2 applications have several mechanisms to control and handle this locking within the application and system. The best way for DB2 performance is to bind the application against the database using the bind parameters for cursor stability, ISOLATION(CS), and CURRENTDATA(NO). This minimizes the immediate locks held and allows other transactions more concurrency to the data. If the application is read-only and is not concerned with other transactions manipulating the data, then use uncommitted read, ISOLATION(UR). The ISOLATION(UR) setting is preferred for applications referencing data that doesn’t change.
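As a minimal JDBC sketch of the same choices from the application side (the connection details and DEPARTMENT table are hypothetical): TRANSACTION_READ_COMMITTED corresponds roughly to cursor stability, and the per-statement WITH UR clause gives uncommitted read for data that doesn’t change.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IsolationSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/SAMPLEDB", "dbuser", "dbpass")) {
            // Roughly the JDBC equivalent of BIND ISOLATION(CS): read only
            // committed data and release read locks as the cursor moves on.
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

            try (Statement stmt = con.createStatement();
                 // For read-only access to data that does not change, the
                 // per-statement WITH UR clause skips read locks entirely,
                 // like BIND ISOLATION(UR).
                 ResultSet rs = stmt.executeQuery(
                     "SELECT DEPT_ID, DEPT_NAME FROM DEPARTMENT WITH UR")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " " + rs.getString(2));
                }
            }
        }
    }
}
```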
Third, the application unit of work must maintain the connection. Large application workloads that perform too many updates, inserts or deletes within a unit of work hold on to too many locks, can cause extended back-out times when an application fails and impact DB2 performance. It is very important to have the proper transaction commit scope and to issue appropriate commits to minimize the number of locks and the amount of work that the database may have to back out. It is also critical for applications to reference the database tables and perform their updates in the same sequence. Referencing the data in the same order acquires and releases locks in a consistent sequence, allowing more application concurrency. Since your application wants to minimize the number of locks and the time those locks are held, it is always best to do your data updates and inserts right before your application performs a commit or ends the transaction. This minimizes the time the locks are held and again provides more workload concurrency.
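A minimal sketch of that pattern, assuming hypothetical ACCOUNT and ORDER_HEADER tables: every transaction touches the tables in the same sequence, and the changes happen immediately before the commit.

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class OrderedUnitOfWork {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/SAMPLEDB", "dbuser", "dbpass")) {
            con.setAutoCommit(false); // one explicit unit of work
            try {
                // ... do all reads and business logic first, so the update
                // locks below are held for the shortest possible time.

                // Every transaction in the workload touches ACCOUNT before
                // ORDER_HEADER; a consistent table sequence avoids deadlocks.
                try (PreparedStatement upd = con.prepareStatement(
                        "UPDATE ACCOUNT SET BALANCE = BALANCE - ? WHERE ACCT_ID = ?")) {
                    upd.setBigDecimal(1, new BigDecimal("25.00"));
                    upd.setInt(2, 1001);
                    upd.executeUpdate();
                }
                try (PreparedStatement ins = con.prepareStatement(
                        "INSERT INTO ORDER_HEADER (ACCT_ID, ORDER_AMT) VALUES (?, ?)")) {
                    ins.setInt(1, 1001);
                    ins.setBigDecimal(2, new BigDecimal("25.00"));
                    ins.executeUpdate();
                }
                con.commit(); // release the locks right after the changes
            } catch (Exception e) {
                con.rollback(); // back out the whole unit of work
                throw e;
            }
        }
    }
}
```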
While most of this information is standard practice for most applications, within the new SOA architectures the services may not know much about the unit of work or connection situation. Within one client SOA architecture, recent research showed that a particular module had seven different connections active within its service: DB2 for z/OS, DB2 for LUW, Oracle, MQSeries inbound and outbound queues, and connections to application and web servers for AJAX activities. It is a bit much to have all of these connections within a single service, and when some minor changes caused this module to fail, many processes could not function. Debugging was also very difficult because one connection failure caused all the connection participants to back out their transactions, causing more locking and data integrity issues.
Make sure your application handles connections properly because they are expensive to acquire and impact DB2 performance. Minimize the number of database activities within a transaction to minimize the locks, and understand the number of connections that are involved within a particular unit of work so that you can get the best DB2 performance from your applications.
Dave Beulke is an internationally recognized DB2 consultant, DB2 trainer and education instructor. Dave helps his clients improve their strategic direction, dramatically improve DB2 performance and reduce their CPU demand, saving millions in their systems, databases and application areas within their mainframe, UNIX and Windows environments. Search for more of Dave Beulke’s DB2 Performance blogs on DeveloperWorks and at www.davebeulke.com.
Previously, I’ve talked about the new DB2 10 temporal tables and how they are great for data warehousing applications. To leverage temporal tables for data warehousing applications within the DB2 for z/OS environment, the system needs the proper DSNZPARM settings. Since everyone has too many systems, some organizations only review system DSNZPARMs during migrations, meaning that many of the new settings might not be set up or enabled. It’s a good idea to review the DSNZPARMs on a regular basis and get them set up to maximize the performance of your data warehousing application SQL by leveraging all the DB2 optimizer capabilities.
First, get a listing of the current DSNZPARMs for your data warehouse system’s settings. This can be done a number of different ways: through your performance monitor, through the JCL that creates the DSNZPARMs, or through the IBM REXX exec that calls the DB2 stored procedure DSNWZP. The DSNWZP stored procedure is one of the many DB2 administration routines and is set up during the system install.
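As a sketch, assuming the single output-parameter form of SYSPROC.DSNWZP and illustrative connection details, a Java program can pull the current settings like this:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class ListZparms {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:db2://zoshost:446/DB2PLOC", "dbuser", "dbpass");
             CallableStatement cs = con.prepareCall("CALL SYSPROC.DSNWZP(?)")) {
            cs.registerOutParameter(1, Types.VARCHAR); // current settings
            cs.execute();
            System.out.println(cs.getString(1)); // one long delimited list
        }
    }
}
```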
The data warehouse DSNZPARMs affect the performance of the DB2 system and its applications in many different ways, from the DB2 optimizer’s access path choices to the number of parallel processes DB2 will perform during data retrieval. To make sure that all of DB2’s performance features and functions are available to improve performance, I've created an alphabetic list of DB2 data warehouse DSNZPARMs. Making sure these data warehouse DSNZPARMs are enabled and adjusted for your system will help your application fully exploit DB2 and get the best performance possible for your data warehouse.
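As one example of how these settings surface in applications, a dynamic SQL program can request query parallelism itself through the CURRENT DEGREE special register, for which the CDSSRDEF DSNZPARM supplies the subsystem default. A hedged sketch, reusing the hypothetical weekly sales MQT from earlier:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ParallelQuerySketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:db2://zoshost:446/DB2PLOC", "dbuser", "dbpass");
             Statement stmt = con.createStatement()) {
            // Ask DB2 to consider query parallelism for the dynamic SQL that
            // follows; the CDSSRDEF DSNZPARM supplies the subsystem default
            // for this special register.
            stmt.execute("SET CURRENT DEGREE = 'ANY'");
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT SALE_YEAR, SUM(TOTAL_SALES)" +
                    "  FROM SALES_WEEKLY_MQT GROUP BY SALE_YEAR")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + ": " + rs.getBigDecimal(2));
                }
            }
        }
    }
}
```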
After the original post, “Reminding Management of the Advantages of System Z,” many people commented on how their management is increasingly out of touch with the mainframe. Commenters also stated that the System Z environment is really processing almost all of the transactions in their companies and that the Windows platform systems continue to have scalability issues.
Although the mainframe revenue for IBM suffered in 2009 because of its upgrade cycle, the introduction of the z10 System platform demonstrates it is the best open system. Yes, that is correct: the mainframe is the most open system available because it runs all types of workloads, from the legacy standards of Assembler, COBOL, PL/I and so on, to C++, C#, Java, PHP and the rest of the languages that run on UNIX and Windows boxes.
Also, some people are starting to run “virtualized” Windows on mainframe environments; PCWorld highlighted this capability in early 2009. With the System Z speed, scalability and network, the mainframe continues to be the best solution for all types of workloads. A nice short demo of z/VOS is on YouTube, along with many other videos that you can show to your iPhone-obsessed boss, demonstrating that consolidating those hundreds of MS SQL Server instances is also possible.
The story of virtualization continues to drive UNIX consolidation to the mainframe. In 2009 Allianz consolidated 60 servers into a single mainframe, saving substantial operating, licensing and energy costs while improving scalability. This story, detailed in this ComputerWorld article, is being repeated at many companies as the mainframe IFL, zIIP and zAAP specialty engines continue to bring processing power at PC-level prices. This consolidation activity has a very short-term return on investment, as these efforts usually pay for themselves in the first year and reduce power consumption dramatically, making them “green” projects.
The next time the hundreds of Windows or UNIX server configurations need an OS, database or other software upgrade, make sure to mention how System Z is saving other companies time and operating costs by consolidating these environments to the best cost alternative -- the mainframe System Z.
In this post, I’ll delve a little deeper into the system architecture and CPU reduction opportunities found in a major DB2 system at a large financial institution. (To see the first part of the case study, click here.)
In order to do a complete performance analysis of the system, the system and application statistics were reviewed using the standard DB2 performance reports. This data provided the basis for the various system, database, application and SQL observations and improvement recommendations. These statistics, along with system, process and application documentation, interviews with application programmers and observations of the workload, guided the investigation of the CPU consumption and the CPU reduction effort.
Current Enterprise Architecture
The enterprise architecture had evolved over the years to support many diverse database systems. This caused several databases to be cloned and their transaction workloads to be intermixed. This combination of CICS transactions provided a diverse workload of different data requirements, run-time durations and application types.
This combination of workloads runs on a single CPU mainframe environment that supports both the test and production environments. Workloads come into the system through a variety of interfaces: CICS, Visual Basic and Web applications using MQ Series Connect and batch jobs throughout the day. These applications access a variety of database tables that support the corporation’s nation-wide business needs.
The enterprise applications environment, with its mix of applications, operates efficiently but experiences occasional dramatic spikes in application CPU requirements. These spikes manifest themselves throughout the day when CICS program errors occur and dumps are created, causing the system to pause while it documents the transaction state. This occurs too frequently: almost once every 15 minutes in the production CICS region. Busy business periods of multiple concurrent transactions with a large memory footprint also show stress within the systems.
Work Load Manager
The architecture of the system and its performance are controlled through a variety of software, with Work Load Manager (WLM) playing a central role in overall system performance. WLM controls CPU and provides priorities for the different subsystems, online workloads and batch processes.
Analysis of the WLM settings needed to be done to determine the optimum and most efficient workload software settings, and to determine whether the DB2, CICS and batch transactions have compatible settings to maximize throughput.
Observing the system processing revealed that the workflow accomplished fluctuates when errors or dumps occur in the various CICS regions. Plotting these dumps against the system workflow showed that system CPU peaked and workflow was severely impacted.
When an on-line program error or dump occurs, its core dump documentation and resolution become the highest priority within the system, stopping or pausing all other work. An example of the problem occurred by 10:30 a.m. on a summer day: five regions had 27 errors/dumps occur by that time, roughly one every five to six minutes (27 dumps in 150 minutes) during the production work day. Industry standards typically allow only a very small number of these errors or dumps in production regions. This problem relates directly to application quality assurance testing, and this situation will only continue to degrade the overall workflow and performance of the systems.
CICS Region Configuration and Allocations
The architecture of the CICS systems and the on-line programs reflects how additional data and capabilities have been added over time. New CICS regions and databases were added as additional systems joined the workload and additional features were added to the applications.
These workloads were each separated into their own regions. To improve the overall workflow and provide further room to efficiently grow the CICS transaction workload, a Sysplex architecture could be considered. The CICS Sysplex architecture separates the workload into terminal-owning regions (TORs), application-owning regions (AORs) and data-owning regions (DORs) that can be finely tuned to service each specific type of workload. These regions work together to spread and balance the peak transaction workloads.
All of these architecture, system, database, application and SQL considerations provide opportunities for CPU cost reductions. These cost reductions could be achieved through system tuning, database design analysis, application SQL documentation, and application coding standards and reviews. Implementing them has the potential to save tremendous CPU capacity and delay a CPU upgrade.
- Analyze the number of abends, deadlocks and dumps within different parts of your applications. These deadlocks and dumps take a tremendous amount of CPU resources at critical times within your system.
- Make sure that your Work Load Manager (WLM) is set up properly to distribute the CPU resources adequately and properly to the various database systems and applications. Having the database prioritized at the same level as or below the applications can cause performance and overall throughput problems.
- Validate the settings between your CICS transaction configurations. Make sure the maximum workload from one TOR, AOR or DOR is not overwhelming another CICS processing partner.
Designing the unit-of-work for a given transaction entails many components. Different techniques and methods are incorporated depending on the components, such as Hibernate, iBatis, JPA or Enterprise Bean technology, used to process the transaction. The Java transaction framework and the object patterns incorporated with the components also affect the transaction unit-of-work. All these factors together provide complete flexibility for today's Java developers.
Unfortunately, with this flexibility comes the responsibility to handle the transaction as effectively and efficiently as possible. For example, the various Hibernate, iBatis, JPA or Enterprise Bean technologies often shield the programmer from the database access. The database object is often passed through several methods or classes before it is used, so a number of modifications could already have taken place. This same database object is also the same database SQL table access used for many different processes, which is typical of the majority of the DB2 Java system reviews done recently.
This database object usually performs a generic SQL call against a single table, retrieving all of its columns with minimal WHERE clause filtering to obtain DB2 Java data. Alternatively, the DB2 Java SQL call could use a unique key to get a single row from the table. In both cases the SQL is fairly simple. When it has only minor WHERE clause filtering, the database access is too generic. When the access is by unique key, it is usually too fine-grained to retrieve the group of data desired. Sometimes the DB2 Java processing passes multiple instances of the unique-access database object and the Java method processes each of these database objects in turn.
In all these scenarios, the SQL database access within the DB2 Java application does not fit the transaction processing or its unit-of-work. Generic access through the various Java persistence layers usually provides only basic performance for your transaction processing and usually retrieves too many rows for a given transaction. To achieve peak performance, the DB2 Java transaction needs to access specific sets of database information and process them quickly. In too many DB2 Java systems this is a rare situation.
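As an illustration of fitting the access to the transaction, the sketch below (hypothetical ORDER_LINE table and columns) retrieves only the needed columns for a whole set of keys in one statement, instead of a generic SELECT * or one unique-key call per row:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.List;

public class OrderLineFetcher {
    // Fetch only the columns the transaction needs, for the whole set of
    // keys, in a single statement.
    public void printLineTotals(Connection con, List<Integer> orderIds)
            throws Exception {
        if (orderIds.isEmpty()) {
            return; // nothing to fetch
        }
        StringBuilder sql = new StringBuilder(
            "SELECT ORDER_ID, LINE_NO, QTY, PRICE FROM ORDER_LINE WHERE ORDER_ID IN (");
        for (int i = 0; i < orderIds.size(); i++) {
            sql.append(i == 0 ? "?" : ",?"); // one marker per key
        }
        sql.append(") ORDER BY ORDER_ID, LINE_NO");
        try (PreparedStatement ps = con.prepareStatement(sql.toString())) {
            for (int i = 0; i < orderIds.size(); i++) {
                ps.setInt(i + 1, orderIds.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("order %d line %d total %s%n",
                        rs.getInt(1), rs.getInt(2),
                        rs.getBigDecimal(3).multiply(rs.getBigDecimal(4)));
                }
            }
        }
    }
}
```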
As I have talked about previously, DB2 Version 9 for z/OS introduced more than 50 great new performance features. Many of these features can really benefit your standard DBA operations, improve application performance and increase the overall availability of your systems and data.
One of the important new changes in DB2 Version 9 for z/OS is the set of utility options and performance improvements, which help the overall maintenance and health of your systems.
Several performance improvements were made within these standard DB2 utilities, leveraging improved access methods and the new big block I/O features within z/OS.
These improvements result in the potential to save a tremendous amount of CPU on many of your daily, weekly and monthly utility jobs.
Analysis and performance figures published by IBM show the following CPU performance improvement potential:
- 0 to 15% in Copy Tablespace
- 5 to 20% in Recover Index, Rebuild Index, and Reorg Tablespace/Partition
- 5 to 30% in Load
- 20 to 60% in Check Index
- 35% in Load Partition
- 30 to 50% in Runstats Index
- 40 to 50% in Reorg Index
- Up to 70% in Load Replace Partition with NPIs and dummy input
Elimination of BUILD2 Phase
One of the most important items related to the DB2 Version 9 for z/OS utility improvements is the elimination of the BUILD2 phase processing within the standard utilities when rebuilding indexes from the data entries. The BUILD2 phase was the time when DB2 data was unavailable, so eliminating this phase improves availability for all your applications and shortens utility-related downtime.
Eliminating the BUILD2 phase provides a dramatic improvement in availability, but could increase CPU time and elapsed time because DB2 manages the concurrent applications while one or more partitions are reorganized. Eliminating the BUILD2 phase also allows online reorgs during non-peak processing, giving DBAs more opportunity to maintain and improve their system, database and application performance.
With each new release of DB2, IBM makes important improvements in performance and CPU reduction, like those noted in this recent blog post.
Another situation where a DB2 Java transaction runs into problems is when it must check something outside the critical transaction path or its normal activity. For example, consider a DB2 Java transaction that uses seven discrete web services to accomplish a complete transaction unit-of-work and, after the third web service, runs into a situation where something else needs to be checked. The processing then tries to resolve the situation by accessing another service, and the new service experiences an error exception. In most DB2 Java database transaction environments, the previous three services’ work would be rolled back and the entire transaction would need to be restarted.
Within good DB2 Java processing designs, the extra situation checking would be moved out of the standard flow of the services’ transaction processing. Starting another unit of work is understandable for these double-checking situations because we want to retain the integrity of the first group of services’ activity already completed. Analysis needs to be done to determine how often the exception processing is needed and how many times it errors out with an exception.
One of the new features within DB2 Version 9.7 (Cobra) is autonomous transactions, which allow a transaction to commit a block of statements independent of the invoking transaction. The work done by an invoked autonomous transaction is committed even if the invoking transaction itself is rolled back. This feature is perfect for this type of exception processing within DB2 Java applications and can easily be implemented within a web service structure.
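A minimal sketch of the idea, assuming a hypothetical EVENT_LOG table and illustrative connection details: the AUTONOMOUS SQL PL procedure commits its log row in its own unit of work, so the row survives the caller’s rollback.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AutonomousLogSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/SAMPLEDB", "dbuser", "dbpass")) {
            con.setAutoCommit(false);
            try (Statement stmt = con.createStatement()) {
                // An AUTONOMOUS SQL PL procedure runs in its own unit of
                // work; the hypothetical EVENT_LOG table must already exist.
                stmt.execute(
                    "CREATE OR REPLACE PROCEDURE LOG_EVENT (IN MSG VARCHAR(200))" +
                    " AUTONOMOUS LANGUAGE SQL" +
                    " BEGIN" +
                    "   INSERT INTO EVENT_LOG (LOGGED_AT, MESSAGE)" +
                    "     VALUES (CURRENT TIMESTAMP, MSG);" +
                    " END");
            }
            try (CallableStatement cs = con.prepareCall("CALL LOG_EVENT(?)")) {
                cs.setString(1, "exception path entered");
                cs.execute();
            }
            // Even after the caller rolls back, the log row stays committed.
            con.rollback();
        }
    }
}
```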
Given the object model of Java and the relational model of the DB2 database, accessing data properly continues to be difficult for most DB2 Java application developers. Over the history of DB2 Java development, there have been many attempts within vendor products, interfaces and open source projects to bridge this object-to-relational chasm. Many object-relational mapping (ORM) solutions exist, but few applications leverage them properly or efficiently.
The leading architectures providing good performance for my DB2 Java clients today, the Enterprise Java Bean (EJB) specification and the Java Persistence API (JPA), continue to evolve. Sometimes I have seen the open source Hibernate product implemented by DB2 Java projects looking for a quick database interface, but usually it is leveraged improperly and performs poorly. Sometimes the Hibernate interface even masks or creates problems in DB2 Java systems with its SQL handling and various parameter settings. Some clients have even written their own plain old Java object (POJO) interface to get to their DB2 data.
Any of these ways to get to the data can work and deliver good performance if the proper application framework, architecture and design pattern are matched with the correct application and transaction type. Working with IMS, IDMS, CICS and MQ systems referencing DB2 shows that very large databases with high availability use a variety of frameworks, architectures and design patterns. Just because an application uses an object-oriented language such as Java, C# or .Net does not mean that good database standards and principles should not be implemented or expected. Too often the focus is on making the application fit the framework when more attention should be paid to DB2 Java performance.
Jeff Jonas' keynote session at IDUG Europe 2010 brought up several interesting thoughts and ideas. As the sessions and conversations started, it seemed that Java, Hibernate and .Net systems have started to cause DB2 performance problems for a large number of companies. Many great hallway conversations pointed out how we all have great standards, code review and EXPLAIN processes within our COBOL infrastructure, but have nothing within these other development environments, including DB2 Java. This is common, and I always help clients with their DB2 Java performance by using the Optimization Service Center, Visual Explain, Optim Data Studio and Query Tuner products. All of these are great for quickly improving DB2 Java, Hibernate and sometimes even .Net systems.
Java, Hibernate and .Net Projects
Several people wanted to hear about my experience fixing DB2 Java performance, working with the Optim Data Studio products and how they can help with DB2 Java, Hibernate and .Net projects. We talked about how easily the SQL can be uncovered and then changed from dynamic JDBC Java processing to static SQL with the new IBM pureQuery product. For several companies, their storage-constrained DB2 systems can really use the reduction in the dynamic statement cache gained by converting these DB2 Java performance problems into static applications. In addition, the bonus of a CPU reduction for Java, Hibernate and other JDBC-connected applications from running static, without having to repeatedly check security, object existence and access plans, is a huge business selling point for getting pureQuery implemented as soon as possible.
Be sure to join us at IDUG 2011 in Prague where I'll be presenting “DB2 10 Temporal Database Designs for Performance” on November 14th.
DB2 Java performance is often a problem because the application processing emulates work the database can execute more efficiently, or because the processing is poorly designed. Both of these scenarios, which my teams have found during performance or design reviews of DB2 Java systems and applications, always lead to extended I/O activity and excessive CPU usage.
Too often, when the DB2 Java application was designed, the full scope of the eventually implemented processing was unknown. The specifications, or even the coding of the backbone framework processing, began before everything was known, or so many additional processing add-ons were bolted onto the transaction that the original design no longer fits the transaction and it no longer performs well. When additional functions are added into the transaction scope, many times the additional data retrievals are not added into the existing SQL processing. Instead they are coded as additional stand-alone SQL calls. This leads to SQL statement after SQL statement being executed during a single transaction of a DB2 Java application.
These add-on transaction functions typically add more SQL to the transaction unit-of-work. This leads to DB2 Java system transactions that seem to need hundreds or even thousands of SQL database calls to process a transaction from beginning to end. These large numbers of SQL calls usually touch and lock a large number of tables, inspect the data and finally perform the transaction processing. This situation typically uses excessive CPU and performs a large number of unneeded I/Os.
By not combining or enhancing the existing SQL to retrieve the additional data, the overall number of calls continues to expand and DB2 Java database performance continues to suffer. The application design is needlessly neglected when these new requirements come along. When changing requirements result in additional SQL calls, with the application itself evaluating or combining the new SQL data with an existing object data store, the result is more CPU usage and poor response time.
To avoid these types of situations in your DB2 Java application, understand all the data that is needed by your transaction. The application processing that combines or reevaluates data needs to be pushed back into the existing database SQL statements; DB2 does it much more efficiently. Retrieving additional data separately is bad for I/O, CPU, locking and overall performance. So the next time your DB2 Java transaction needs additional functionality, don’t just add on: integrate your new functions into the existing SQL and designs of your DB2 Java application database processes.
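As a hedged example of the integrate-don't-add-on approach, the sketch below folds an add-on customer lookup into the existing order query with a join; ORDER_HEADER and CUSTOMER are hypothetical tables:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OrderInquiry {
    // One integrated statement replaces an order lookup followed by a
    // separate add-on customer lookup; DB2 joins the rows far more cheaply
    // than the application can with two calls.
    public void printOrder(Connection con, int orderId) throws Exception {
        String sql =
            "SELECT O.ORDER_ID, O.ORDER_AMT, C.CUST_NAME, C.CREDIT_LIMIT" +
            "  FROM ORDER_HEADER O" +
            "  JOIN CUSTOMER C ON C.CUST_ID = O.CUST_ID" +
            " WHERE O.ORDER_ID = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.printf("order %d amt %s for %s (limit %s)%n",
                        rs.getInt(1), rs.getBigDecimal(2),
                        rs.getString(3), rs.getBigDecimal(4));
                }
            }
        }
    }
}
```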
In previous blog entries I have talked about transaction scope, how DB2 Java applications access the database too much, and how transaction units of work (UOWs) are not analyzed properly.
Too often these days the design and development of DB2 Java applications is done in an Agile or Scrum type of project methodology where short, concise project deliverables are designed to deliver working transactions. These methodologies are good for transactions but sometimes are not good for overall DB2 Java performance. Since the scope of the Agile or Scrum sessions is individual transactions, the big picture of the overall business and processing objectives sometimes gets lost. This leads to transactions that only accomplish a small discrete piece of the business. Other transactions then become necessary and retrieve the same master customer or product information again and again in order to complete the processing activity.
Database caching can mitigate and shield the performance impact of repeatedly getting the same database information, but it cannot cache all the activity. When analyzing your various transactions, determine the overall business objectives and flow of your DB2 Java application, and combine standalone transactions or SOA services that use the same data keys as much as possible.
Previously we talked about the first alphabetic group of DB2 data warehouse DSNZPARMs that can improve your access paths and overall application performance. This week the second set of DSNZPARMs is discussed. Many of the data warehouse DSNZPARMs discussed are somewhat hidden within the regular DSNZPARM install panels. All of the DSNZPARMs discussed are available in DB2 for z/OS Version 9; some are also available in DB2 Version 8 or only in DB2 Version 10.
Caution needs to be taken with all system settings, especially these data warehouse DSNZPARMs. These DSNZPARMs are meant to change access paths and improve them, but each data warehouse design is unique, along with each application access path, so results will vary. If the data warehouse DB2 subsystem is shared with other OLTP or operational applications, I highly recommend fully documenting your current access paths and setting up a full PLAN STABILITY plan and package management structure before changing any DSNZPARMs. This documentation, along with a good PLAN STABILITY plan and package management implementation and back-out practices, helps your environment quickly react to and back out any detrimental access paths encountered through an unexpected rebind of any program.
Some of the comments I’ve received regarding this issue highlighted the resurgence of data warehousing on the z/OS platform and why running a data warehouse on z/OS provides many advantages over other platforms. One advantage noted by several people is that when your data warehouse runs on z/OS, the huge ETL processes usually don’t have to transmit the data over a network. Even when the network bandwidth is robust, avoiding this extra bottleneck can sometimes save hours of overhead, guaranteeing that your data refresh jobs have enough time every day to provide critical refreshes of the data within your data warehouse.
Additionally, most of your source data warehouse data comes from the z/OS operational systems and can quickly be put into operational business intelligence data warehouses. This fresh data increases sales, provides real-time inventory and product availability updates and, most important, removes latency from your critical single master data source of record for the enterprise.
Improve your system and application performance by adjusting these data warehouse DSNZPARMs to improve your access paths and to take full advantage of the superior DB2 optimizer technology for the most efficient performance available.
One of the first standards and principles neglected in the DB2 Java applications that I have seen is that the application references the database too many times to complete a single transaction. While it is good to use your ORM database interface, the architect and application programmer should know how many times the ORM layer is used during each different type of transaction. Since these ORM frameworks mask the database as just another object, many programmers do not know when their Java class or web service is firing a SQL call to the database. Within some DB2 Java performance problem systems, I have seen several hundred DB2 Java application calls to the database to complete a single transaction. This level of activity will never provide sub-second DB2 Java transaction performance.
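One way to make that activity visible: if the ORM layer is Hibernate, its Statistics API can count the JDBC statements a unit of work actually prepares. A minimal sketch, with the surrounding transaction wiring assumed:

```java
import org.hibernate.SessionFactory;
import org.hibernate.stat.Statistics;

public class SqlCallAudit {
    // Report how many JDBC statements a unit of work actually prepared,
    // so the ORM's database activity is visible instead of masked.
    public static long statementsUsed(SessionFactory sf, Runnable unitOfWork) {
        Statistics stats = sf.getStatistics();
        stats.setStatisticsEnabled(true);
        long before = stats.getPrepareStatementCount();
        unitOfWork.run(); // run the transaction under test
        return stats.getPrepareStatementCount() - before;
    }
}
```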
Comparing these new DB2 Java application database call levels against the other applications within an environment usually shows a substantial increase in overall usage. Sometimes the legacy transactions reference the database only 10-25 times while the new DB2 Java applications reference the database 130-175 times to complete a single transaction. Database usage during the Agile project development process or new Scrum scenarios must be highlighted so everyone understands the overall performance requirements and expectations.
Most object-oriented applications have their database calls travel through the network, web server and application servers, making performance monitoring and evaluation even more difficult. Even though the next buzzword of cloud computing is supposed to cache and make magic of all these transaction performance problems, not even a cloud can make hundreds of calls to the database perform with sub-second response time.
The IBM zEnterprise computer was announced in July 2010, and it can run any workload. Yes, any workload! Your company can now run mainframe, UNIX and some Windows applications within the new environment. The new zEnterprise is designed with the performance and capacity for large-scale consolidation of any workload. As has been done recently with consolidating UNIX systems onto the mainframe, it’s now possible to consolidate the myriad of Windows-based systems with the integration of z/VM 6.1. This new computing architecture helps consolidate performance, integrate data better and improve overall management of the computing environment with unified standard performance, security and optimization facilities for better overall availability.
The zEnterprise takes consolidation one step further, as any workload can be implemented within its standard environment with dramatic energy savings, using workload optimizers through zEnterprise’s integration of the IBM POWER7 and IBM System x blades, allowing the consolidation of diverse application workloads.
This helps deliver the mainframe’s reliability, availability and security to these other platforms while helping your company lower risk and reduce overall computing and energy costs. The new architecture also helps centralize all the data from these diverse platforms into a central hub, eliminating the many copies and remote islands of data. This improves integration, reduces latency and boosts performance as heterogeneous architectures perform more end-to-end enterprise transactions.
Consolidated computing through the new zEnterprise system is just beginning. Now all architectures, systems and applications can truly be evaluated for their performance. Regardless of the operating system, programming language or network protocol, your company can get it all done with the lowest total cost of ownership, with up to 96 of the newest and fastest CPUs (5.2 GHz) available on any computer in the world through the new zEnterprise system.
Check out all the zEnterprise information at the links below and the interesting comments about z/VM from PCWorld. Then imagine all your enterprise data integrated into a high-performance, energy-efficient and optimized system.
ibm.com/systems/zenterprise/