Application design
When you create an application, you can improve its performance in several ways. For example, consider using compound SQL and stored procedures, grouping related database requests into one request, refining the predicate logic, implementing data blocking, and tuning your dynamic SQL. This section also applies to applications that use embedded SQL.
- Compound SQL and stored procedures
- For applications that send and receive many commands and replies, network processing usage can be significant. Compound SQL and stored procedures are two ways to reduce this processing usage.
If an application sends several SQL statements without intervening programming logic, you can use compound SQL. If you require programming logic within the group of SQL statements, you can use stored procedures.
All executable statements can be contained within a compound SQL statement, except the following:
- CALL
- CLOSE
- COMPOUND SQL
- CONNECT
- DESCRIBE
- DISCONNECT
- EXECUTE IMMEDIATE
- FETCH
- OPEN
- PREPARE
- RELEASE
- ROLLBACK
- SET CONNECTION
Stored procedures help to reduce network traffic by placing program logic at the server. You can commit automatically when exiting the procedure. You can also return results sets, which minimize application logic at the client.
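The round-trip savings of grouping statements into one request can be sketched with Python's sqlite3 module. This is only an analogy: sqlite3 is in-process, so the single `executescript` call here stands in for one compound-SQL request to a remote server, and the `accounts` table is purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")

# One call carrying a group of statements, instead of one
# request/response pair per statement (compound-SQL analogy).
conn.executescript("""
    UPDATE accounts SET balance = balance - 30 WHERE id = 1;
    UPDATE accounts SET balance = balance + 30 WHERE id = 2;
""")

balances = dict(conn.execute("SELECT id, balance FROM accounts"))
print(balances)  # {1: 70, 2: 80}
```

A stored procedure goes one step further: the whole group of statements, plus the logic between them, lives at the server and is invoked with a single CALL.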
- Grouping requests
- Grouping related database requests (SQL statements) into one database request can reduce the number of requests and responses transmitted across the network.
For example, grouping the statements:

SELECT COL1, COL2, COL5, COL6 FROM TABLEA WHERE ROW_ID=1
SELECT COL1, COL2, COL5, COL6 FROM TABLEA WHERE ROW_ID=2

into:

SELECT COL1, COL2, COL5, COL6 FROM TABLEA WHERE ROW_ID=1 OR ROW_ID=2

sends fewer requests across the network.
You can also use keywords such as IN and BETWEEN to reduce the number of rows returned. In addition, you can use WHERE, IN, and BETWEEN keywords on UPDATE and DELETE statements.
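A minimal sketch of the same idea with an IN predicate, using sqlite3 as a stand-in for a remote Db2 connection (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablea (row_id INTEGER, col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO tablea VALUES (?, ?, ?)",
                 [(1, "a", "b"), (2, "c", "d"), (3, "e", "f")])

# One request with an IN predicate, instead of one
# SELECT (and one network round trip) per row_id.
rows = conn.execute(
    "SELECT row_id, col1, col2 FROM tablea WHERE row_id IN (?, ?) "
    "ORDER BY row_id",
    (1, 2),
).fetchall()
print(rows)  # [(1, 'a', 'b'), (2, 'c', 'd')]
```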
- Predicate logic
- You can use predicate logic to request only the rows and columns that are needed. This minimizes the network traffic and CPU usage for data transmission.
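A short sketch of requesting only the needed row and columns, again with sqlite3 standing in for a Db2 connection (the wide `col3` column is an illustrative stand-in for data you do not need):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablea (row_id INTEGER, col1 TEXT, col2 TEXT, col3 BLOB)")
conn.execute("INSERT INTO tablea VALUES (1, 'x', 'y', zeroblob(4096))")

# Name only the columns you need and restrict the rows with a
# predicate, so the large col3 value never crosses the network.
row = conn.execute(
    "SELECT col1, col2 FROM tablea WHERE row_id = ?", (1,)
).fetchone()
print(row)  # ('x', 'y')
```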
For example, do not use the query:

SELECT * FROM TABLEA

if only the first row of TABLEA with ROW_ID=1 is really needed, or if only column 1 and column 2 are needed.
- Data blocking
- You should use data blocking if you expect large amounts of data from the server. Blocking improves the use of the network bandwidth and reduces the CPU usage of both the IBM® mainframe database server and the Db2 Connect server. There is a fixed amount of CPU and network usage for each message sent and received, regardless of size. Data blocking reduces the number of messages required for the same amount of data transfer.
With blocking, the first row of data from a query will not be delivered to the application until the first block is received. Blocking increases the retrieval time for the first row, but improves the retrieval time for subsequent rows.
Another consideration is the amount of memory that is used. The memory working set usually increases when blocking is turned on.
Within Db2 Connect, you can control the amount of data that is transferred within each block.
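The fetch-in-blocks pattern can be sketched with the DB-API `fetchmany` call (sqlite3 as a stand-in; in real Db2 Connect the block size is a configuration setting, not a per-cursor attribute as shown here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur = conn.execute("SELECT n FROM t ORDER BY n")
cur.arraysize = 4            # rows transferred per fetch (the "block" size)
blocks = []
while True:
    block = cur.fetchmany()  # fetches cur.arraysize rows per call
    if not block:
        break
    blocks.append([n for (n,) in block])
print(blocks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Ten rows arrive in three fetches instead of ten, trading a slower first row for fewer messages overall.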
To invoke blocking, use the BLOCKING option of the prep or bind command. Blocking is on if:
- The cursor is read-only, or
- The cursor is ambiguous and blocking is specified during the prep or bind.

Note: When using dynamic SQL, the cursor is always ambiguous.
- SQL statements with BLOCKING
Updatable SELECT statements (using UPDATE/DELETE WHERE CURRENT OF statements) are non-blocking queries, so you should use them only when absolutely necessary.

An updatable SELECT ensures that the row has not changed between the time the SELECT completes and the UPDATE/DELETE is issued. If this level of concurrency is not important to your application, an alternative is to use a DELETE or UPDATE with search criteria based on the values returned from a non-updatable SELECT.

For a read-only SELECT, specify FOR FETCH ONLY, except under VM and VSE, where it is not supported.
- Static and dynamic SQL
- Use static SQL as much as possible. It avoids runtime SQL section preparation and ambiguous cursors. If dynamic SQL cannot be avoided, you can do the following to minimize the network traffic and improve performance:
- If the statement is a SELECT and must be prepared, perform PREPARE ... INTO SQLDA. The SQLDA should be allocated to the full size needed for your settings. If the maximum number of columns is x and is expected to stay that way, allocate an SQLDA with x SQLVARs. If the number of potential columns is uncertain (and memory is not a problem), use the maximum number of SQLVARs (256). If the SQLDA allocation is not large enough to hold the returned description, the program must issue another DESCRIBE with an SQLDA large enough to hold the result, which increases network traffic.
Do not use the PREPARE and DESCRIBE sequence. Using the PREPARE ... INTO statement provides better performance.
- Execute statically bound SQL COMMIT or ROLLBACK statements instead of dynamic COMMIT or ROLLBACK statements.
- If it is not a SELECT, COMMIT, or ROLLBACK statement, issue EXECUTE IMMEDIATE to execute the statement instead of the PREPARE and EXECUTE sequence.
- ODBC applications use dynamic SQL. You might use the CLI/ODBC static profiling feature to improve performance. This feature allows you to capture and convert ODBC calls into static statements stored in a database package. The actual performance you will get depends on the complexity of your application.
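The one-shot versus prepare-and-reuse distinction above can be sketched with sqlite3 (a loose analogy only: Python's DB-API has no explicit PREPARE, but sqlite3 caches prepared statements keyed on the SQL text, so a parameterized statement executed many times is prepared once):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

# Statement run exactly once: execute it directly, analogous to
# EXECUTE IMMEDIATE (a separate PREPARE step buys nothing).
conn.execute("INSERT INTO t VALUES (1)")

# Statement run many times: keep the parameterized form so the
# driver can prepare it once and reuse it for every row.
conn.executemany("INSERT INTO t VALUES (?)", [(n,) for n in range(2, 6)])

total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)  # 5
```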
- Other SQL considerations
- Using the Command Line Processor (CLP) is, in general, slower than having dynamic SQL in the program because the CLP must parse the input before submitting the SQL to the database engine. The CLP also formats data when it is received, which might not be necessary for your application.
SQL statements in an interpreted language, such as REXX, are substantially slower than the same SQL statements in a compiled language, such as C.
There are two types of CONNECT statement, called type 1 and type 2. With type 2 connect, connecting to a database puts the previous connection into a dormant state but does not drop it. If you later switch to a dormant connection, you avoid the processing usage of loading libraries and setting up internal data structures. For this reason, using type 2 connect might improve performance for applications that access more than one database.
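The benefit of dormant connections can be sketched with a hypothetical connection cache. Here sqlite3 stands in for the remote databases and the `connect` helper is illustrative, not a Db2 API: switching back to a database reuses the existing connection instead of paying the full connect cost again.

```python
import sqlite3

# Cache of open connections, keyed by logical database name.
_connections = {}

def connect(db_name):
    if db_name not in _connections:
        # First use: pay the full connect cost (a real driver would
        # dial the named server and set up its data structures here).
        _connections[db_name] = sqlite3.connect(":memory:")
    # Later uses: switch back to the dormant connection for free.
    return _connections[db_name]

a = connect("SALESDB")
b = connect("HRDB")        # SALESDB connection stays open, now dormant
assert connect("SALESDB") is a   # switching back reuses the connection
```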