Keeping ahead of the competition is a top priority in today's ever-changing business environment. From an IT and database perspective, this means that you need to be able to access and present business data in new, insightful ways to your users, and deliver these capabilities yesterday!
Ease of use is a key aspect of the integrated database IBM DB2 for i, which has attracted and spoiled System i and AS/400 customers for a long time. As the complexities and requirements of IBM System i applications have increased in recent years, IBM has significantly enhanced the SQL functionality and performance of DB2 for i, while keeping it easy to use and manage. As you will discover in this article, the V6R1 release delivers another round of sophisticated SQL features, paired with self-managing and self-learning features.
Report creation and development are simplified in V6R1 with support for grouping sets and the CUBE and ROLLUP OLAP expressions, which enable you to aggregate data by different dimensions with a single SQL statement. For instance, these new features allow a single query to return sales data to the user and allow those sales results to be viewed at a store, region, or country level. Listing 1 provides an example of using the new grouping set and super group syntax to return data from business orders in several perspectives:
Listing 1. Grouping sets and super groups example

SELECT OrderYear, OrderQuarter, OrderMonth, SUM(OrderAmount)
  FROM orders
  GROUP BY GROUPING SETS( CUBE(OrderYear, OrderQuarter, OrderMonth),
                          ROLLUP(OrderMonth, OrderNo),
                          (DiscCode), (CustomerNo) )
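To make the super group expansion concrete, here is a smaller sketch reusing the column names from Listing 1: a ROLLUP over year and quarter produces detail rows for each (year, quarter) pair, subtotal rows for each year, and a grand total row.

```sql
-- Equivalent to GROUP BY GROUPING SETS((OrderYear, OrderQuarter),
--                                      (OrderYear), ())
SELECT OrderYear, OrderQuarter, SUM(OrderAmount)
  FROM orders
  GROUP BY ROLLUP(OrderYear, OrderQuarter)
```

In the subtotal and grand total rows, the columns that were rolled away are returned as NULL values.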
SELECT FROM INSERT support is another enhancement that allows you to accomplish more processing with a single SQL statement. This feature is a big help to those developers who take advantage of DB2's ability to automatically generate key values with the identity column attribute. One challenge with identity columns is accessing the key value (for example, a part number) that DB2 generated, since doing so requires an additional SQL statement: an INSERT statement is executed to add the new row, followed by an extra SELECT statement with the Identity_Val_Local function to retrieve the generated value. This two-statement solution is also problematic when a block of rows is inserted with a blocked INSERT, because the Identity_Val_Local function only returns the generated key value for the last row inserted.
With this new syntax in DB2 for i 6.1, the INSERT statement on the FROM clause is executed first, and then the outer SELECT statement returns the values that DB2 generated for the part_id identity column. Notice that the FINAL TABLE keyword is required when referencing an INSERT statement on the FROM clause. The second SELECT FROM INSERT example in Listing 2 demonstrates how this new support can be used to access the generated identity values for a blocked insert operation. The ORDER BY INPUT SEQUENCE clause allows you to get the generated values back for the part_id column in the order in which they were generated by DB2. Not only does this new SELECT FROM INSERT syntax solve the issue of retrieving generated values for blocked inserts, but it also allows applications to receive a small performance boost from a single DB2 request that executes multiple database operations.
Listing 2. SELECT FROM INSERT example
CREATE TABLE parts (
  part_id   INTEGER GENERATED ALWAYS AS IDENTITY,
  part_type CHAR(10),
  part_qty  INTEGER,
  part_desc CHAR(6));

SELECT part_id
  FROM FINAL TABLE (
    INSERT INTO parts VALUES(DEFAULT, 'Widget1', 5, 'BIG') );

SELECT part_id
  FROM FINAL TABLE (
    INSERT INTO parts VALUES(DEFAULT, 'Widget2', 120, 'SMALL'),
                            (DEFAULT, 'Widget3', 2, 'MEDIUM') )
  ORDER BY INPUT SEQUENCE;
The ability to specify a VALUES clause on a FROM clause also makes it easier for developers creating queries that need to reference data from in-memory tables or data structures in a result set. In the example in Listing 3, rate data maintained by the application is referenced in a join operation without first having to go through the overhead of creating and populating a DB2 temporary table. This new VALUES on FROM support provides IBM i developers with more flexibility when constructing queries and reports.
Listing 3. VALUES on FROM clause example

WITH proposedRates(prType, prRate) AS (
  VALUES(?,?), (?,?), (?,?), (?,?) )
SELECT rmtype, prRate,
       ((prRate - rmRate)/rmRate)*100 AS RateChgPercent
  FROM rooms, proposedRates
  WHERE rmtype = prType
  ORDER BY RateChgPercent DESC
The toolbox of built-in SQL functions was also expanded in V6R1 to make life simpler for IBM i developers using SQL. Data privacy laws and policies continue to make the encryption of data a popular topic, so DB2 for i has added support for the popular AES encryption algorithm with the delivery of the ENCRYPT_AES function. Those IBM i applications dealing with date and time values will also benefit from enhancements to the TIMESTAMP_FORMAT function as well as several new date- and time-related functions.
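As a quick sketch of how column-level encryption functions are typically used (the table, column, and password values here are illustrative, not from the article), an encryption password is set for the connection, data is encrypted on insert with ENCRYPT_AES, and decrypted on retrieval with DECRYPT_CHAR:

```sql
-- Hypothetical table; ENCRYPT_AES returns binary data, so the
-- column is defined as VARCHAR ... FOR BIT DATA to hold it.
CREATE TABLE accounts (
  acct_id  INTEGER,
  acct_num VARCHAR(128) FOR BIT DATA);

SET ENCRYPTION PASSWORD = 'Pc27zkW!';
INSERT INTO accounts VALUES(1, ENCRYPT_AES('4444-3333-2222-1111'));
SELECT acct_id, DECRYPT_CHAR(acct_num) FROM accounts;
```

The same SET ENCRYPTION PASSWORD value must be in effect when decrypting, or the decrypt function fails.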
The SKIP LOCKED DATA clause is a great addition for those developers looking to increase the throughput of applications supporting a large number of concurrent users. One common misconception about DB2 for i locking is that if you're running a query against a DB2 table that other jobs and users are changing, DB2 will automatically skip over any row changes that have not yet been committed. Actually, the opposite occurs: if a row matches the search criteria of the query, DB2 stops and waits to see if it can acquire the lock on that row. Let's review a simple example using the Cursor Stability (CS) isolation level to make this behavior easier to understand.
One night you're using the internet to search for possible flights to Fiji. Two seconds before you submit your search on the travel agency's website, the following update is performed on their flight database to change the departure time from 4:30pm to 5:25pm as part of a large batch of updates to their flight schedules:
UPDATE flights SET departTime = '05:25'
  WHERE departTime = '04:30' AND flightNum = 331 AND destCity = 'NAN'
Your flight search request issues the following SELECT statement to find a flight that leaves after 5pm for Fiji. When the SELECT statement processes the row that was just updated, it finds a match because the newly updated departure time of 5:25pm meets the specified selection criteria.
SELECT * FROM flights WHERE departTime >= '05:00' AND destCity='NAN'
Since DB2 does not know that the row has been locked, DB2 for i attempts to
acquire a read lock on this row as dictated by the CS isolation level. The
UPDATE statement has not yet been committed since it's
part of the batch update, meaning that the update lock on the row still exists.
Therefore, the update lock conflicts with the requested read lock, so the
SELECT statement will now stop and wait until the row
lock is released. If the batch update process in this example is long-running, the
SELECT statement may fail with a lock timeout error
(the default record wait time is 60 seconds on IBM i). The fact that the
SELECT statement stops and waits for an update lock on
the row to be released is a big surprise to many developers who believe that DB2
will just skip over the locked row and search for other rows in the table meeting
the selection criteria. This may be good or bad news depending on the requirements
of your application.
With this new support in DB2 for i 6.1, developers have a choice of behaviors. They can continue to have applications run with the default behavior of DB2 waiting for locks, or they can have DB2 skip over locked rows. The skipping behavior is activated with the SKIP LOCKED DATA clause, as shown in the following listing:
SELECT * FROM flights WHERE departTime >= '05:15' AND destCity='NAN' SKIP LOCKED DATA
The SELECT statement will now skip over any rows where it encounters a lock conflict, instead of waiting to acquire the row lock. The SKIP LOCKED DATA clause can also be specified on searched UPDATE and DELETE statements. This new clause is only honored by DB2 for i with the Cursor Stability and Read Stability isolation levels.
Extended indicators provide INSERT and UPDATE statements with another new capability in V6R1. Extended indicators enable developers to code a single UPDATE statement instead of coding a separate UPDATE statement for each combination of columns that needs to be updated. This statement reusability is demonstrated in the coding example found in Listing 4. Notice the WITH EXTENDED INDICATORS clause on the cursor declaration. This clause allows extended indicator variables to be passed on positioned update operations associated with the referenced cursor (in other words, cur1). Extended indicators are then set to a defined set of special values to tell the UPDATE statement whether or not a column should be included in the update operation. In this example, the inds3 extended indicator variable is set to a value of -7 (UNASSIGNED), which tells DB2 to omit the status column from the positioned update. Thus, only two columns (shipdate and quantity) are actually changed by the embedded UPDATE statement, even though three columns are referenced on its SET clause. INSERT statements are also able to utilize extended indicators in V6R1.
Listing 4. Extended indicator example
EXEC SQL DECLARE cur1 CURSOR WITH EXTENDED INDICATORS FOR
  SELECT order_id, shipdate, quantity, status FROM orders;
EXEC SQL OPEN cur1;
EXEC SQL FETCH cur1 INTO :orddat:inds;
...
inds3 = -7;
EXEC SQL UPDATE orders
  SET shipdate = :nsdate:inds1,
      quantity = :nqty:inds2,
      status   = :nstat:inds3
  WHERE CURRENT OF cur1;
...
Many IBM i developers execute SQL scripts from a CL program or command line interface using the Run SQL Statements (RUNSQLSTM) command. The 6.1 version of the RUNSQLSTM command now has the ability to process and execute SQL statements stored in a stream file; in previous releases, the SQL had to be stored in an IBM i source physical file member. This new stream file support can benefit developers in several different ways. Most importantly, your SQL statements are no longer limited to 80 characters in length, as they were when stored in a source file member. That 80-character limit made SQL coding quite difficult and tedious for developers. In addition, a stream file can accommodate larger, more complex SQL scripts, since stream files have a size limit of 1TB, which is dramatically bigger than the 16MB limit of a source file member. Finally, developers using a stream-file-based change management system can now easily execute SQL scripts stored in that type of change management system.
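As a quick sketch of the new capability, the stream file is identified with the RUNSQLSTM SRCSTMF parameter; the path and option values below are illustrative, not from the article:

```
RUNSQLSTM SRCSTMF('/home/dev/scripts/build_tables.sql')
          COMMIT(*NONE) NAMING(*SQL)
```

In earlier releases, only the SRCFILE and SRCMBR parameters were available for identifying the source member containing the script.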
Support for source stream files is also available with the ILE SQL pre-compilers in V6R1. In addition, IBM continued the momentum of improved SQL and RPG integration from previous releases with additional enhancements to the ILE RPG SQL pre-compiler. One of the key improvement areas in V6R1 is the pre-compiler's ability to scope variables at a procedure level. This new capability allows the SQL pre-compiler to properly process and execute code similar to the RPG code in Listing 5:
Listing 5. RPG SQL variable scoping example
PSubProc1         B                   EXPORT
D                 PI
D MyArray         ds                  qualified dim(1000)
D  customer                     25A
D  region                       25A
 /free
    EXEC SQL DECLARE c1 CURSOR FOR
      SELECT customer, region FROM cust_dim;
    EXEC SQL OPEN c1;
    EXEC SQL FETCH NEXT FROM c1 FOR 100 ROWS INTO :MyArray;
    EXEC SQL CLOSE c1;
    return;
 /end-free
P                 E

PSubProc2         B                   EXPORT
D                 PI
D MyArray         ds                  qualified dim(1000)
D  part                         55A
D  mfgr                         25A
D  brand                        10A
 /free
    EXEC SQL DECLARE c2 CURSOR FOR
      SELECT part, mfgr, brand FROM part_dim;
    EXEC SQL OPEN c2;
    EXEC SQL FETCH NEXT FROM c2 FOR 200 ROWS INTO :MyArray;
    EXEC SQL CLOSE c2;
    return;
 /end-free
P                 E
The same variable name,
MyArray, is used in both RPG
procedures in this example to declare data structures with slightly different
attributes. The first procedure declares the MyArray data structure with two
fields, and the second procedure contains three fields within the
MyArray declaration. In addition, each procedure
contains an SQL
FETCH statement referencing the
duplicated variable name. Previously, the SQL RPG pre-compiler was unable to scope
the variable name to the procedure, which resulted in headaches for RPG
developers. The result of this deficiency was that the SQL RPG pre-compiler would
either fail with an error, flagging the duplicate variables names as illegal (even
though it's valid RPG syntax), or unpredictable results would occur at runtime
because the SQL RPG pre-compiler was incorrectly sharing one definition of the
duplicated variable across multiple procedures. The V6R1 SQL RPG pre-compiler
eliminates this hurdle by correctly scoping the variables at a procedure level.
Later this year, IBM plans on providing a PTF to make this variable-scoping
enhancement also available on V5R4. Here is a listing of some other V6R1
enhancements to the RPG SQL pre-compiler:
- Ability to define variables based on
- Improved integration with IBM application tools for source code error resolution
Not to be overlooked, COBOL programmers will also benefit from the Unicode support added to the ILE COBOL SQL pre-compiler.
IBM i development shops coding with industry-standard data access methods such as JDBC, SQL Call Level Interface (CLI) and ADO.NET will also benefit from the V6R1 enhancements. Java™ data access is improved with the JDBC Version 4.0 enhancements along with dramatic performance improvements in V6R1. The DB2 for i .NET data provider has been enhanced to exploit Version 2.0 of ADO.NET as well as provide support for distributed transactions and multi-row inserts.
Software vendors continue to port their solutions to IBM i to expand the market reach of their applications and to satisfy demand from existing System i customers who want the latest applications and tools to run natively on IBM i. To reduce the effort required in these porting projects, IBM has enhanced the SQL functionality of DB2 for i in V6R1.
Porting Microsoft SQL Server and MySQL tables with automatic or hidden timestamp columns to DB2 for i was a daunting challenge prior to V6R1. When a table in one of these database products includes a timestamp column, the database engine automatically updates that timestamp column each time a row is inserted or updated. This automatic update behavior provides applications an easy mechanism to determine if a table row that was read earlier has changed before they attempt to update that same row in the table. It's simple to identify which row has changed by comparing the value of the timestamp column when the row was read with the current value of the timestamp column in that same row of data. This timestamp comparison is central to applications that use an optimistic locking approach and can now be easily implemented with the DB2 for i hidden timestamp column support available in V6R1.
Let's look at the example in Listing 6, below, to better understand how a hidden timestamp column is defined and how it performs during normal I/O to the table:
Listing 6. Automatic, hidden timestamp column example
CREATE TABLE tickets(
  ticket_ord   INTEGER,
  ticket_qty   INTEGER,
  ticket_event VARCHAR(10),
  ticket_ts    TIMESTAMP NOT NULL IMPLICITLY HIDDEN
               GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP);
INSERT INTO tickets VALUES(1, 11, 'mvGAME1'), (2, 8, 'ihGAME4');
SELECT * FROM tickets;
UPDATE tickets SET ticket_qty = 6 WHERE ticket_ord = 2;
SELECT ticket_ord, ticket_qty, ticket_event, ticket_ts FROM tickets;
The new syntax for hiding a column is specified on the CREATE TABLE statement with the IMPLICITLY HIDDEN clause on the ticket_ts column definition. The IMPLICITLY HIDDEN clause indicates that the column will not be visible on any SQL statement unless it is explicitly referenced by name. This behavior is illustrated in Figure 1, which contains the output of the first SELECT statement (SELECT * FROM tickets) in Listing 6. The ticket_ts column is excluded from the result set since that column was not explicitly referenced on the SELECT statement.
Figure 1. Hidden timestamp SELECT * FROM tickets output
The FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP clause causes DB2 to change the ticket_ts column value each time a row is inserted or updated in the table. The effect of this clause is displayed in Figure 2, which contains the result of the final SELECT statement in Listing 6; that statement explicitly includes the ticket_ts column along with the unhidden columns in the tickets table. Observe the values in the ticket_ts column to see how DB2 automatically maintained them as the INSERT and UPDATE statements in Listing 6 ran, even though those statements did not include any references to the ticket_ts column.
Figure 2. Hidden timestamp SELECT all columns output
New data types in 6.1 also make table definitions more portable to DB2 for i. The national character data types (NCHAR, NVARCHAR, and NCLOB) provide a standard way of defining a Unicode column with the UTF-16 encoding. The decimal floating-point data type, DECFLOAT, is a new numeric data type added for compatibility with the other DB2 products; it combines the attributes of DECIMAL and FLOAT with extended accuracy.
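A minimal sketch of how these new types might appear in a ported table definition (the table and column names are illustrative, not from the article):

```sql
CREATE TABLE products(
  prod_id    INTEGER,
  prod_name  NVARCHAR(50),   -- Unicode column, UTF-16 encoding
  unit_price DECFLOAT(16));  -- decimal floating point
```

DECFLOAT supports precisions of 16 or 34 significant digits.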
Porting databases and applications from the DB2 for z/OS® and DB2 for Linux®, UNIX®, and Windows® products to DB2 for i is also simplified in V6R1. DB2 for i 6.1 enhancements for DB2 family portability include Full Outer Join syntax support and a new feature allowing DB2 for i to ignore unsupported syntax. The SQL scripts for other DB2 databases often contain low-level configuration syntax (for example, tablespace options) on SQL data definition language statements, and that syntax prevented the scripts from executing on DB2 for i until it was removed. To eliminate this tedious work, DB2 for i 6.1 now simply ignores syntax and options that will never be supported on DB2 for i. This is possible because the DB2 for i database engine and operating system automatically handle many low-level database administration tasks that require manual configuration with the other DB2 products. Listing 7 contains examples of statements with ignored syntax. An SQL warning is returned to flag whenever syntax is ignored on DB2 for i. Porting database creation scripts should be much easier now that the step of deleting unnecessary syntax from script files is eliminated.
Listing 7. Ignoring unsupported syntax
CREATE TABLESPACE tspace1
  MANAGED BY DATABASE USING(device '/dev/rcont $N' 20000);
CREATE TABLE new_table (c1 INTEGER PRIMARY KEY, c2 CHAR(10))
  INDEX IN tspace1;
SQL CLI is frequently the data access programming interface used by DB2 applications that need to be ported to DB2 for i. In V6R1, these SQL CLI applications are also easier to port with the addition of support for the CLI Wide-Character APIs. The CLI Wide-Character functions are found in applications that support Unicode or Double-byte data.
Over the last couple of releases, the SQL Query Engine (SQE) has been one of IBM's key initiatives for boosting SQL performance to new peaks. This performance advancement for SQL workloads on DB2 for i continues in V6R1 with the removal of two major roadblocks that prevented the usage of SQE: National Language Sort Sequences (NLSS) and functions relying on low-level translation support. Low-level translation capability is required by commonly used SQL built-in functions such as UPPER and LOWER. The NLSS support in IBM i is frequently used in global markets where the application requires that character data be sorted in a manner that matches the local language, instead of the default ordering (in other words, *HEX), which closely mirrors the sorting of the English alphabet. With these roadblocks now removed, the most common items that prevent the usage of SQE in V6R1 are:
- Select/omit and derived logical files defined on an underlying table
- Logical file references in SQL statements
- Non-SQL interfaces (for example, OPNQRYF, Query/400)
Note: Since SQL is the strategic interface for DB2 for i data access and the industry standard, non-SQL interfaces are a very low priority for the SQL Query Engine. SQE will only support SQL-based interfaces for the foreseeable future.
IBM has also changed the default value for the Ignore_Derived_Index QAQQINI configuration parameter to enable more SQE usage in V6R1. The new default value of *YES allows the SQL Query Engine to be used in environments where SQL statements reference DB2 objects created with DDS. This new value allows the SQE query optimizer to ignore any keyed logical files containing select/omit criteria or key field derivations that it encounters during optimization, instead of rerouting execution to the older Classic Query Engine (CQE). The Ignore_Derived_Index QAQQINI parameter can be manually set back to *NO to have the SQE optimizer behave the same as in releases prior to V6R1.
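As a sketch of how this parameter might be changed back (the library name MYLIB is hypothetical), a job-specific QAQQINI file is typically created by duplicating the template shipped in QSYS, the option row is changed, and the file is activated with CHGQRYA:

```sql
-- CL: CRTDUPOBJ OBJ(QAQQINI) FROMLIB(QSYS) OBJTYPE(*FILE)
--               TOLIB(MYLIB) DATA(*YES)
UPDATE MYLIB/QAQQINI
  SET QQVAL = '*NO'
  WHERE QQPARM = 'IGNORE_DERIVED_INDEX';
-- CL: CHGQRYA QRYOPTLIB(MYLIB)
```

The system-naming form (MYLIB/QAQQINI) assumes the statement is run with *SYS naming.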
The most interesting V6R1 SQE enhancements are those new features that leverage the extensible architecture of the SQL Query Engine. One of the key 6.1 enhancements is the introduction of self-learning query optimization into SQE. A self-learning query optimizer is able to analyze poor-performing query plans and dynamically adjust its internal algorithms based on feedback to select a better query plan on future executions. In V6R1, the SQE optimizer automatically analyzes poor performing query plans to examine the I/O characteristics as well as record retrieval patterns and compare them to the values used during optimization of the query. If the query optimizer detects significant mismatches, the optimizer modifies its assumptions and algorithms during the next execution of that SQL request to generate a better plan. To complement the learning optimizer, the DB2 runtime engine is also equipped with adaptive technologies that allow the plan of a currently running query to be modified on the fly to improve the efficiency of the query for the remainder of the run.
In addition, the SQE optimizer can also improve SQL performance in V6R1 with new indexing technology: a feature called SQL derived indexes. Long-time customers may view this enhancement as the SQL interface finally catching up with DDS by allowing SQL indexes to be created with expressions and selection criteria, just as i5/OS and OS/400 developers have been doing for years on keyed logical file definitions. This view is only partly true. Listing 8 demonstrates how SQL derived indexes take these capabilities even further with the ability to specify SQL built-in functions as part of the key expression:
Listing 8. SQL derived index example
CREATE INDEX upper_cname ON customers(UPPER(company_name))
The UPPER function shown in the previous listing will most likely be one of the more popular SQL built-in functions used in derived index definitions. Developers often use the UPPER function to implement case-insensitive searches, such as the query shown in Listing 9. By forcing all of the company names to be converted to uppercase, the query will return all occurrences of IBM even if the data entry clerks entered that company name differently (for example, ibm or Ibm).
Listing 9. Case-insensitive search
SELECT customer_id, customer_phone FROM customers WHERE UPPER(company_name)= 'IBM'
Performance problems could arise on past releases because the UPPER function prevented the DB2 for i query optimizer from using an index to speed up this search. The new SQL derived index support allows the index in Listing 8 to be created and the SQE query optimizer to use that index to quickly retrieve the customer data, since the key expression on the index exactly matches the column expression in the query's WHERE clause.
Shared-weight sort sequences have been successfully used by applications to solve the performance issues associated with case-insensitive searches. However, that approach does require more complex configuration when creating your database and applications. In addition, sort sequences can't be used with more complex expressions such as:
WHERE UPPER(CONCAT(FirstName, CONCAT(' ', LastName))) = 'DALLAS'
Just about any SQL built-in function can be included in the key definition of an SQL-derived index.
Listing 10 contains an example of an SQL-derived index containing selection criteria. This is known as a sparse index since the index does not contain a key value for every row in the table. While this sparse SQL index can be created on V6R1, it should be noted that the SQE query optimizer currently does not have the ability to use a sparse index in a query implementation. The SQE query optimizer can only choose SQL-derived indexes with key expressions. IBM plans on enhancing the SQE query optimizer in the future with the ability to use sparse indexes.
Listing 10. Sparse SQL derived index example
CREATE INDEX cust_ix1 ON customers(customer_id) WHERE activefld='A'
SQL workloads on IBM i should also run faster in V6R1 with the streamlining of the SQL full open code path. A full open is the processing that DB2 must perform the first and second time an SQL statement is run within a connection (or job); some tests have shown a 10% performance improvement from this streamlining.
Applications utilizing stored procedure calls will receive a nice boost from improved caching of repeated stored procedure calls. In addition, IBM also made improvements in the code generation used for SQL procedures, functions, and triggers. One of those improvements is the ability for DB2 for i to generate service program objects for SQL procedures, which is accomplished by specifying the PROGRAM TYPE SUB clause on the CREATE PROCEDURE statement. Existing SQL routines need to be re-created on V6R1 in order to benefit from these improvements.
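As a sketch of the new clause (the procedure name and body are hypothetical, reusing the rooms table from Listing 3), a service program object is requested like this:

```sql
CREATE PROCEDURE set_room_rate(IN p_type CHAR(10), IN p_rate DECIMAL(7,2))
  LANGUAGE SQL
  PROGRAM TYPE SUB        -- generate a service program object
BEGIN
  UPDATE rooms SET rmRate = p_rate WHERE rmtype = p_type;
END
```

Without the clause, DB2 for i generates a program object for the procedure as it did in earlier releases.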
Tuning database performance can be an arduous task when SQL statements are being executed from remote clients such as browsers and ADO.NET applications. A system administrator can easily detect the IBM i server jobs that are consuming extra resources because of a long-running SQL statement. However, it's quite difficult to trace back and determine which remote client or program submitted the SQL request to DB2 for i. To address this issue, IBM has introduced the following special registers in V6R1, which applications can set to simplify this problem determination exercise:
Listing 11. Client special registers
CURRENT CLIENT_ACCTNG
CURRENT CLIENT_APPLNAME
CURRENT CLIENT_PROGRAMID
CURRENT CLIENT_USERID
CURRENT CLIENT_WRKSTNNAME
Application developers can instrument their applications to set the client register values with one of the following interfaces:
- Set Client Information (SQLESETI) IBM i API
Once these registers have been set by the application, administrators can retrieve these settings by collecting database monitor data or by using the System i Navigator SQL Details for a Job task by just right-clicking on the Database object in the navigation tree. Output from the System i Navigator task can be found in Figure 3. In earlier releases, this Navigator task was known as Current SQL for a Job.
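Once set, the register values can also be referenced directly in SQL. For instance, a quick check of the current values might look like this sketch, which uses the SYSIBM.SYSDUMMY1 dummy table:

```sql
SELECT CURRENT CLIENT_APPLNAME, CURRENT CLIENT_USERID
  FROM SYSIBM.SYSDUMMY1;
```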
Figure 3. SQL details for a job
A close review of the SQL details output in Figure 3 shows that IBM has instrumented some of its own products and tools to assign values to these new client special registers. The UPDATE statement shown was run by user KMILL from the System i Navigator Run SQL Scripts interface.
The usability of the SQL Performance Monitors in System i Navigator took a major step forward in V5R4, and that continues with V6R1. The SQL Monitors Analyze task makes it relatively easy to identify a set of SQL statements with performance issues. However, sharing those SQL statements with a co-worker was easier said than done. On releases prior to V6R1, the only options available were a primitive copy and paste or telling the user how to run the same analysis report. The new Save Results task in V6R1 allows the results of a report to be saved directly in a wide variety of output formats. This new interface is shown in Figure 4. This dialog not only allows the report data to be saved into a shareable format, but it also provides an option to launch the PC application (in this case, Microsoft Excel) that's associated with this file type.
Figure 4. System i Navigator save results interface
Column customization is another V6R1 enhancement that makes System i Navigator output easier to read and analyze. The SQL Performance Monitor analysis reports are very comprehensive, returning every piece of information that could be affecting DB2 performance. The downside of this comprehensive approach is that users can be overwhelmed by the numerous columns of data returned; some reports include dozens of columns. Furthermore, the default ordering of the columns in the reports may slow your ability to find the most important performance information. For example, some reports require the user to scroll two or three screens to the right to find the SQL statement text. The column customization support in V6R1 solves both of these problems by providing the ability to reorder the columns and remove columns from the output display. This customization is accomplished through the Select Columns window, as shown in Figure 5. Column customization is also available on other interfaces, such as the Plan Cache, Show Indexes, and Index Advisor displays. Once an interface has been customized, the reporting tools will remember those settings on future reports.
Figure 5. System i Navigator column customization
In addition to these great interface improvements, IBM has also enhanced the data being collected by the database monitor. One example of new data collected by the DB2 for i monitors is the new client special registers (for example, CURRENT CLIENT_APPLNAME) discussed earlier. An even bigger addition is the monitor's ability to collect detailed data for OPNQRYF requests. With this addition, the System i Navigator Visual Explain tool and SQL Performance Monitor analysis reports can be used for the first time to analyze and tune the performance of OPNQRYF requests.
The usability of the index analysis tools in System i Navigator is also improved in V6R1. Customers love the real-time index advice provided by the system-wide Index Advisor added in V5R4. However, the interface for creating indexes based on that advice has not been as well received. If the DB2 for i advisor suggested ten different indexes that you wanted to create from that interface, you had to manually click each of the ten advised indexes to create it. The V6R1 Index Advisor interface allows you to accomplish this task more quickly by simply highlighting a group of advised indexes and selecting the Show SQL task to generate all the CREATE INDEX statements in a single operation.
The Index Evaluator, accessed by executing the Show Indexes task, has been a user favorite since the tool was introduced in V5R3, because it allows users to easily evaluate which indexes and logical files (LFs) are being used by DB2 for i and which index and LF objects are just wasting the disk space and system resources used to maintain them. The drawback of this tool was that indexes could only be evaluated at the table level. Whenever it was necessary to perform index evaluation for all 50 tables (physical files) in a schema (library), the user had to manually run the Show Indexes task on each table in the collection. Fortunately, that's no longer an issue in V6R1, because the Show Indexes task can be executed at the schema level by right-clicking the Tables object in a schema and then selecting this task, as shown in Figure 6:
Figure 6. Show indexes schema-level
The final System i Navigator enhancement to highlight is the V6R1 modifications to the Health Center. The Health Center has been enhanced to start reporting on DB2 runtime limits, such as the number of LOB (Large Object) locators or CLI handles per connection (or job).
System i Navigator is not the only IBM tool that simplifies the day-to-day work of i5/OS developers and administrators. IBM Rational Data Architect, Version 7 (see Resources), provides improved data modeling support for DB2 for i databases in a recent fix pack, including enhancements for DDS-created databases.
Customers and developers will also benefit from the DB2 Web Query enhancements being delivered in 2008 (see Resources). The new DB2 Web Query Run Time User licensing option can significantly reduce the costs and simplify deployment for IBM i customers that have a large number of users that only need to run reports. Report scheduling and distribution will also be available to customers with the new Report Distributor option. To make it easier for developers to leverage and integrate DB2 Web Query reports into their applications, IBM is delivering a software development kit (SDK) and Web services interfaces. In addition, a new Excel plug-in enables the execution of DB2 Web Query reports from within a spreadsheet.
Stepping up simplicity
It should be clear that V6R1 lifts DB2 for i to new heights, especially when you consider that this article covers just the DB2 V6R1 highlights and not a complete list of enhancements. From the great additions to SQL that enable rapid solution development, to the sizzling DB2 performance gains, the DB2 for i V6R1 enhancements provide a great foundation for managing business data in a simplified manner, allowing your company to focus on running your business instead of your DBMS. So run, don't walk, to V6R1.
- DB2 for i5/OS V6R1 overview: See a brief overview of the many enhancements that DB2 for i5/OS V6R1 delivers. Get more detail about these enhancements, as shown in the included chart.
- IBM DB2 for i online information: Browse the DB2 for i references.
- "i5/OS DBA: New Derived Key Indexes in DB2 for i5/OS V6R1" (IBM Database Magazine, February 2008): Speed data access for even complex SQL.
- IBM DB2 for i porting information: Porting to DB2 for i? Find out how.
- IBM Rational Data Architect: Find more information about Rational Data Architect, which simplifies data modeling and integration design, enabling architects to discover, model, visualize, and relate diverse and distributed data assets.
- DB2 Web Query for i: Find out more information about DB2 Web Query for i, which offers a set of modernized tools for a robust, extensible and productive reporting solution.
- DB2 for i SQL Performance Workshops: Attend this workshop to understand the science of query optimization, the art of query optimization, SQL performance techniques and considerations, and SQL performance tools and analytical methods.
Get products and technologies
- Rational Data Architect V7.0: Download a trial version of Rational Data Architect V7.0, an enterprise data modeling and integration design tool designed to help data architects design relational and federated databases, understand data assets and their relationships, and streamline database projects.