Function level 500 (for migrating to Db2 13 - May 2022)

Activating function level 500 (V13R1M500) prevents coexistence with and fallback to Db2 12. Function level 500 is also the first opportunity for applications to use many of the new capabilities in Db2 13. However, new capabilities that depend on Db2 13 catalog changes remain unavailable.

Contents

Enabling APAR: None
Full identifier: V13R1M500
Catalog level required: V13R1M100
Product identifier (PRDID): DSN13015
Incompatible changes: None

New capabilities in function level 500

Function level 500 activates the following new capabilities in Db2 13.

Increased flexibility for package ownership
Starting at function level 500, you can specify the type of owner for a plan or package, or the type of package owner for an SQL PL routine. The owner can be a role or an authorization ID. The default owner is a role if the bind or create operation runs in a trusted context that is defined with the role as object owner and qualifier attributes; otherwise, the default owner is an authorization ID.
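As an illustration (the collection, member, and role names here are invented), a BIND subcommand can use the new OWNERTYPE option to make a role the package owner:

    BIND PACKAGE(MYCOLL) MEMBER(MYPKG) OWNER(APPROLE) OWNERTYPE(ROLE)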

For more information, see the PACKAGE OWNER clause of CREATE PROCEDURE statement (SQL - native procedure) and the OWNERTYPE option of the OWNER bind option.

Page sampling for inline statistics
Beginning in function level 500, the REORG TABLESPACE and LOAD utilities can use page sampling when they gather inline statistics. Page sampling has the potential to reduce both CPU time and elapsed time. In earlier Db2 releases, the RUNSTATS utility can use page sampling, but inline statistics that are gathered by other utilities can use row sampling only.

To use page sampling for inline statistics with REORG TABLESPACE or LOAD, specify the TABLESAMPLE SYSTEM option, or ensure that the STATPGSAMP subsystem parameter is set to SYSTEM (the default) or YES. In function level 500, STATPGSAMP is extended to apply to inline statistics. For more information, see the TABLESAMPLE SYSTEM option description in Syntax and options of the LOAD control statement and Syntax and options of the REORG TABLESPACE control statement.
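For example, a hypothetical REORG control statement (the table space name is invented) can request page sampling for inline statistics explicitly:

    REORG TABLESPACE DB1.TS1 STATISTICS TABLE(ALL) TABLESAMPLE SYSTEM AUTO

With TABLESAMPLE SYSTEM AUTO, Db2 determines the sampling rate based on the size of the table.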
SQL Data Insights
Function level 500 delivers SQL Data Insights (SQL DI), an integrated solution that brings deep learning AI capabilities into Db2. SQL DI uses unsupervised neural networks to generate a specialized vector-embedding model called database embedding, which can be referenced through SQL queries called "cognitive intelligence" queries.

SQL DI is an optional feature that is available at no additional charge with Db2 13. It provides the user interface for training models and exploring data insights, while Db2 provides the in-database infrastructure for training and model table (vector table) management. Db2 also provides three new built-in cognitive functions to speed up query execution.

For more information, see Running AI queries with SQL Data Insights.
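For example, assuming a hypothetical CUSTOMER table that has already been enabled for AI query, a cognitive intelligence query can use the built-in AI_SIMILARITY function to rank rows by similarity to a given customer ID (the ID value is invented):

    SELECT C.CUSTOMER_ID,
           AI_SIMILARITY(C.CUSTOMER_ID, '3668-QPYBK') AS SCORE
    FROM CUSTOMER C
    ORDER BY SCORE DESC
    FETCH FIRST 10 ROWS ONLY;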

Reduced ECSA storage for IFI buffers

Db2 13 reduces the use of ECSA storage for IFI buffers from a maximum of 50 MB to a fixed 8 MB.

Function level 100 reduces the use of ECSA storage for IFI buffers to a maximum of 25 MB. Then, after function level 500 is first activated, it is further reduced to 8 MB. The storage behavior that is introduced in function level 500 continues even if you later activate function level 100*.

To compensate for the reduction in ECSA storage, you must set aside an extra 50 MB for HVCOMMON and 25 MB for private storage. You can reduce the ECSA storage after function level 500 is activated and Db2 starts using the new storage pools. When Db2 uses the new storage pools, the use of ECSA for the retrieval of IFI records noticeably decreases. You can monitor use of the new storage pools by starting the statistics trace to collect IFCID 225. Then, you can check the SHARED / COMMON storage summary report in the formatted IFCID 225 SMF trace record.
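For example, assuming statistics class 1 (which collects IFCID 225) is not already active, you can start the statistics trace with a command like the following:

    -START TRACE(STAT) CLASS(1)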

For more information about ECSA storage requirements, see Calculating the storage requirement for the extended common service area.

Online conversion of tables from growth-based (PBG) to range-based (PBR) partitions

Function level 500 introduces the capability to convert the partitioning scheme of a table with growth-based partitions (in a PBG table space) to use range-based partitions (in a PBR table space). The conversion can be completed as an online change with minimal impact to your applications.

PBG and PBR universal table spaces (UTS) are the strategic table space types for tables in Db2 for z/OS®. PBG table spaces are the default UTS type, and they are well-suited for small to medium-sized tables. However, if an existing table in a PBG table space grows too large, performance degradation or data and index management issues might arise. Consider converting from PBG to PBR when that occurs.

To complete the conversion, you issue an ALTER TABLE statement with the new ALTER PARTITIONING TO PARTITION BY RANGE clause and run the REORG TABLESPACE utility to materialize the pending change. The table space for the table is converted to PBR with relative page numbering (RPN).

For more information, see Converting tables from growth-based to range-based partitions and ALTER PARTITIONING TO PARTITION BY RANGE in ALTER TABLE statement.
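As a sketch (the table, column, table space, and limit-key values are all hypothetical), the pending change is requested with ALTER TABLE and then materialized with REORG:

    ALTER TABLE MYSCHEMA.SALES
      ALTER PARTITIONING TO PARTITION BY RANGE (SALE_DATE)
        (PARTITION 1 ENDING AT ('2020-12-31'),
         PARTITION 2 ENDING AT ('2021-12-31'),
         PARTITION 3 ENDING AT (MAXVALUE));

    REORG TABLESPACE DB1.TS1 SHRLEVEL CHANGE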

Fast index traversal (FTB) support for larger index keys
Function level 500 extends FTB support to unique indexes with a key size for the ordering columns up to 128 bytes and nonunique indexes with a key size up to 120 bytes. For more information, see Fast index traversal.
Increased control for applications over how long to wait for a lock

Function level 500 introduces the CURRENT LOCK TIMEOUT special register and the SET CURRENT LOCK TIMEOUT SQL statement to allow the lock timeout value to be set at the application level. So, you can set a lock timeout interval that suits the needs of a specific application, or even an individual SQL statement. Doing so minimizes application lock contention and simplifies portability of applications to Db2, without the need to assign the application to a separate Db2 subsystem. Applications must run at APPLCOMPAT level V13R1M500 or higher to issue the SET CURRENT LOCK TIMEOUT statement.
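For example, an application that runs at APPLCOMPAT level V13R1M500 or higher might set a 30-second lock timeout around a contentious statement and then revert to the subsystem default (the table and predicate are hypothetical):

    SET CURRENT LOCK TIMEOUT = 30;
    UPDATE MYSCHEMA.ACCOUNTS SET STATUS = 'CLOSED' WHERE ACCT_ID = 12345;
    SET CURRENT LOCK TIMEOUT = NULL;  -- revert to the IRLMRWT value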

The value of the CURRENT LOCK TIMEOUT special register overrides the value of the IRLMRWT subsystem parameter. It applies to certain processes related to locking, like the claim or drain of an object and cached dynamic statement quiescing. You can limit use of CURRENT LOCK TIMEOUT by setting the new SPREG_LOCK_TIMEOUT_MAX subsystem parameter. The default value is -1, which means no limit. For more information, see LOCK TIMEOUT MAX (SPREG_LOCK_TIMEOUT_MAX subsystem parameter).

For more information, see CURRENT LOCK TIMEOUT special register and SET CURRENT LOCK TIMEOUT statement.

You can also use Db2 profile tables to specify an assignment for the CURRENT LOCK TIMEOUT special register, for both remote and local threads. See Setting special registers by using profile tables.

Profile table enhancements for application environment settings

Db2 13 introduces the capability to use system profiles for local applications in certain situations. Previously, the initial values for special registers and system built-in global variables could be specified in the Db2 profile tables, but they were used only for initialization with distributed threads. The new Db2 profile table support for local applications requires Db2 to be started with the DDF subsystem parameter set to AUTO or COMMAND. See DDF STARTUP OPTION field (DDF subsystem parameter).

Starting in function level 500, Db2 profile tables can now be used for both local and remote applications in the following situations:

Ability to delete an active log data set from the BSDS while Db2 is running
Function level 500 introduces the new REMOVELOG option for the -SET LOG command to support online removal of an active log data set from the BSDS, eliminating the need to stop Db2 and use the offline utility DSNJU003 to accomplish the task. The -SET LOG REMOVELOG command deletes the specified log from the BSDS if it is not in use, or marks the log REMOVAL PENDING if it is in use.
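For example (the data set name is hypothetical and the operand form is a sketch; see the command reference for exact syntax), you might remove a copy-2 active log data set and then check its status:

    -SET LOG REMOVELOG('DSNC130.LOGCOPY2.DS03') COPY(2)
    -DISPLAY LOG DETAIL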

To support monitoring of active log data sets in REMOVAL PENDING status, function level 500 also introduces the DETAIL option for the -DISPLAY LOG command, which shows information about the REMOVAL PENDING status of local active log data sets. The output from the DSNJU004 utility also shows the REMOVAL PENDING status where applicable.

For more information, see Deleting an active log data set from the BSDS with the -SET LOG command.

SPT01 and SYSLGRNX table spaces are converted to DSSIZE 256 GB
Starting in function level 500, the first time that the REORG TABLESPACE utility runs for the following directory objects, it converts the DSSIZE to 256 GB.
  • DSNDB01.SPT01 to resolve issues that are related to the removal of the SPT01_INLINE_LENGTH subsystem parameter by APAR PH24358 in Db2 12.
  • DSNDB01.SYSLGRNX in anticipation of future growth in this table for increasing workloads and conversions of non-UTS table spaces to UTS.

The conversion is automatic and does not require any special utility syntax. It updates the following Db2 catalog table values for each table space:

  • The SYSIBM.SYSTABLESPACE and SYSIBM.SYSTABLEPART catalog tables are updated with DSSIZE = '256G'.
  • A SYSCOPY record is inserted for the table space, with the following values to indicate that REORG changed the DSSIZE: ICTYPE = 'A', STYPE = 'D', TTYPE = '64G'. In this situation, the TTYPE value records the previous DSSIZE.

If function level 100* is activated, already converted table spaces continue to use the larger DSSIZE, but the REORG utility does not convert unconverted table spaces.

Recovery to a point-in-time (PIT) before REORG converted the DSSIZE reverts the DSSIZE to 64 GB. As always, if any one of the catalog or directory objects is recovered to a prior PIT, it is best to recover all catalog and directory objects to the same PIT.

Improved concurrency for altering tables for DATA CAPTURE

Function level 500 introduces a concurrency improvement for ALTER TABLE statements that change the DATA CAPTURE attribute of tables. With this enhancement, Db2 no longer waits for other statements that depend on the altered table to commit. As a result, the DATA CAPTURE alteration can now succeed even when concurrent statements are running continually against the table. The improved concurrency is available to applications that issue ALTER TABLE statements at APPLCOMPAT level V13R1M500 or higher.

Earlier Db2 releases quiesce the following objects that depend on the altered table as part of the DATA CAPTURE alteration:

  • Static packages
  • Cached dynamic SQL statements

Because the DATA CAPTURE alteration waited for applications that depended on the altered table to commit, continuous concurrent activity on the table could cause the ALTER TABLE statements to fail.

The new DATA CAPTURE attribute now takes effect immediately when the processing completes, even before the ALTER statement commits. As a result, concurrent statements on the same Db2 member might write out different log formats in the same transaction. For more information, see Altering a table to capture changed data.

Change REORG INDEX SHRLEVEL REFERENCE or CHANGE so the NOSYSUT1 keyword is the default
Starting at function level 500, the NOSYSUT1 keyword is the default for the REORG INDEX utility when it runs with the SHRLEVEL REFERENCE or SHRLEVEL CHANGE keywords. As a result, the utility avoids use of a work data set, which improves performance and allows REORG INDEX to use parallel subtasks to unload and build index keys. The default value of the REORG_INDEX_NOSYSUT1 subsystem parameter is also changed from NO to YES, and YES is now the only option, so this subsystem parameter no longer influences the behavior of REORG INDEX.

For more information, see Syntax and options of the REORG INDEX control statement.

CREATE TABLESPACE uses MAXPARTITIONS 254 by default

At APPLCOMPAT level V13R1M500 or higher, CREATE TABLESPACE statements use MAXPARTITIONS 254 by default.
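For example, at APPLCOMPAT level V13R1M500, the following statement (the table space, database, and buffer pool names are hypothetical) creates a partition-by-growth table space with the defaults MAXPARTITIONS 254 and DSSIZE 4 G:

    CREATE TABLESPACE TS1 IN DB1 BUFFERPOOL BP0;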

When MAXPARTITIONS 256 is explicitly specified, the default DSSIZE varies from 4 G to 32 G depending on the page size. However, starting with application compatibility level V12R1M504, when MAXPARTITIONS is not explicitly specified, Db2 12 uses MAXPARTITIONS 256 by default, but the default DSSIZE is always 4 G regardless of the page size.

This apparent inconsistency avoided a risk of failure for existing statements, where the default data set size might be greater than 4 G depending on the page size. The statements might fail with the SQLCODE -904 error with the 00D70008 reason code if the data sets for the table space are not associated with a DFSMS data class that is specified with extended format and extended addressability.

With MAXPARTITIONS 254 as the default, the result is now consistent regardless of whether MAXPARTITIONS is explicitly specified. The calculated default DSSIZE is always 4 G.

This change is potentially an incompatible change when applications first start running at APPLCOMPAT level V13R1M500 or higher. For more information about changes like this, see Incompatible changes for APPLCOMPAT levels in Db2 13.

See the MAXPARTITIONS and DSSIZE descriptions in CREATE TABLESPACE statement.

Long names support for timeout and deadlock messages in IRLM

Starting at function level 500, Db2 13 introduces IRLM support for long names for client information such as workstation ID, user ID, and transaction ID, in deadlock and timeout messages DSNT175I and DSNT376I.

In Db2 12 and earlier, the long name values are truncated in the message output.

With this change, Db2 13 also starts populating existing long name fields in IFCID 172 and IFCID 196 records. These fields remain unpopulated in Db2 12 and earlier.

New-function APARs for function level 500 or higher

The following capabilities that are introduced by new-function APARs after the general availability of Db2 13 take effect when you activate function level 500 with the PTF applied, or immediately if you apply the PTF at any higher function level.

Enhanced authorization and authentication in SQL Data Insights

APARs PH66445, PH66446, and PH66267 (June 2025) introduce important security enhancements to Db2 SQL Data Insights (SQL DI). With the security updates, you can use Db2 secondary authorization IDs to authorize SQL DI users for object and model management. You can also use RACF® PassTickets or authentication token files to access Db2 from the shell CLI or the REST API.

Currently, you grant SQL DI users the permissions for object and model management by running the DSNTIJAI sample JCL job in Db2. You must run the sample job to authorize each individual user one at a time, which is laborious and time consuming. After applying the updates, you have the option to grant the permissions to a Db2 secondary authorization ID in the JCL job. Then, specify the same secondary authorization ID in SQL DI when enabling an object for AI query. After the enablement, the specified secondary authorization ID, instead of the primary authorization ID of the individual user, owns the model table and index of the object. Users associated with the secondary authorization ID are automatically authorized all at the same time to manage the object and the model.

In addition, you currently access SQL DI and connect from SQL DI to Db2 by using a credentials file, which stores your user ID and password so that you do not need to enter them manually each time you access SQL DI or Db2. However, each connection request still transmits the encrypted password across the network. After applying the updates, you have the option of using an authentication token file to access SQL DI or a RACF PassTicket to connect to Db2, which eliminates the need to store the password and reduces how often it is transmitted across the network.

For more information, see the following related topics:

MODIFY RECOVERY utility DELETEDS override

APAR PH65233 (May 2025) introduces a new Db2 subsystem parameter, UTIL_MODIFY_REC_DELETEDS, which controls whether the MODIFY RECOVERY utility can delete image copy data sets when the DELETEDS option is specified.

The default for the new UTIL_MODIFY_REC_DELETEDS subsystem parameter is ALLOW, which indicates that the DELETEDS option is honored for the MODIFY RECOVERY utility. A Db2 system administrator can instead specify IGNORE, in which case the DELETEDS option is ignored even if it is specified. The UTIL_MODIFY_REC_DELETEDS setting takes effect when function level 500 or higher is activated in Db2 13.
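For illustration (the table space name and age value are hypothetical), a MODIFY RECOVERY statement such as the following requests deletion of the image copy data sets along with the corresponding SYSCOPY records; with UTIL_MODIFY_REC_DELETEDS set to IGNORE, only the records are deleted:

    MODIFY RECOVERY TABLESPACE DB1.TS1 DELETE AGE(90) DELETEDS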

Related function levels for this APAR: FL 500, FL 508. New function in this APAR takes effect after the PTF is applied and function level 500 or higher is activated. Activating function level 508 or higher verifies that the PTF for this APAR is applied.

For more information, see the following related topics:

Ability to store the expansion dictionary in the compression dictionary data set

Starting in Db2 13 with APAR PH64099 (May 2025) and function level 500 or higher, you can store the expansion dictionary in the compression dictionary data set (CDDS). The expansion dictionary is used to decompress compressed table space data that is returned in log records. The table space data can be compressed with either the fixed-length algorithm or the Huffman algorithm.

The ability to store the expansion dictionary in the CDDS can improve performance and availability for replication applications that use one of the following methods:
  • IFI READS calls for IFCID 306
  • IBM® Integrated Synchronization in IBM Db2 Analytics Accelerator for z/OS Version 8.1 or later

Before this APAR and its pre-conditioning APARs, the expansion dictionary was stored in the table space whose data was being returned in log records. Storing the expansion dictionary in the CDDS can improve availability of the expansion dictionary.

Storing expansion dictionaries in the CDDS has the following advantages for decompressing log records for replication applications:
  • Compressed table spaces that are referenced in retrieved log records do not need to be opened when the table space data is decompressed, which has the following advantages:
    • In a data sharing environment, when the table spaces are not open, they do not become GBP-dependent.
    • Decompression failures because the table spaces are in the STOP state do not occur.
  • Db2 does not need to obtain a DBD lock or claim on the referenced table spaces. This lessens the occurrence of serialization issues with concurrently running data definition statements or utilities.
  • Db2 does not need to access expansion dictionaries from the logs. There is less performance degradation due to retrieval of the logs, especially when archive logs are on tape.

Related function levels for this APAR: FL 500. New function in this APAR takes effect after the PTF is applied and function level 500 or higher is activated. The PTF for this APAR is expected to be verified by activating a to-be-determined future function level.

For more information, see the following related topics:

New REST API and shell CLI for SQL Data Insights server administration and AI object-related management

APARs PH64220 and PH64221 (January 2025) introduce the RESTful application programming interface (API) and the shell command-line interface (CLI) of SQL Data Insights (SQL DI). The new interfaces enable you to use OpenAPI-compliant REST API requests and shell CLI commands to administer SQL DI server settings and manage its connections, AI objects, and object models.

Without this update, you manually perform SQL DI server administration and object-related management tasks by using the web user interface (UI) only. After applying the update, you can automate these tasks with the REST API, the shell CLI, or both.

Related function levels for this APAR: FL 500, FL 507. New function in this APAR takes effect after the PTF is applied and function level 500 or higher is activated. Activating function level 507 or higher verifies that the PTF for this APAR is applied.

For more information, see the following related topics:

COPY utility zIIP support

APAR PH63832 (December 2024) enhances some of the COPY utility processing to be zIIP eligible. This enhancement can help reduce CPU costs associated with creating backups.

Related function levels for this APAR: FL 500, FL 507. New function in this APAR takes effect after the PTF is applied and function level 500 or higher is activated. Activating function level 507 or higher verifies that the PTF for this APAR is applied.

New model retraining capability of SQL Data Insights

APAR PH60870 (May 2024) introduces the new model retraining capability of SQL Data Insights (SQL DI). When you enable an object for AI query, SQL DI trains a neural network model based on the data that you select. As the data changes, the accuracy of the model might degrade over time. After you apply this APAR, you can retrain the model whenever necessary.

For more information, see the following related topics:

Dynamic election of SQL Data Insights vector prefetch on tables

APAR PH55212 (August 2023) enhances the SQL Data Insights vector prefetch capability by making the enablement decision dynamic based on data and AI cache availability. If the MXAIDTCACH parameter is set to a value in the range of 1–512 and if a query invokes a SQL DI function on a table, Db2 dynamically chooses between vector prefetching and row-by-row processing based on the range of rows qualified for the AI object and the specified AI cache size. If the query invokes a SQL DI function on a view, Db2 uses vector prefetching.

This APAR also delivers Db2 support of numeric data types for the AI_ANALOGY function and introduces the new built-in AI_COMMONALITY function. See Enhancements to SQL Data Insights for more information about the new AI_COMMONALITY function.

Related function levels for this APAR: FL 100, FL 504. New function in this APAR takes effect after the PTF is applied at any function level. Activating function level 504 or higher verifies that the PTF for this APAR is applied.

New vector prefetch capability and improved built-in scalar functions of SQL Data Insights

APAR PH51892 (February 2023) introduces a new vector prefetch capability and improves the existing built-in scalar AI_SEMANTIC_CLUSTER function, which, together with other optimizations, delivers significant functional and performance improvements for SQL Data Insights (SQL DI).

Vector prefetch is the advanced process that SQL DI uses to upload numeric vectors to the IBM Z® AI Optimization Library (zAIO) for calculating similarity scores. The existing query processing architecture forces Db2 to submit one record at a time to z/OS for processing. Prefetching enables Db2 AI functions to submit multiple vectors in a batch at a time, which greatly accelerates SQL DI query processing.

Db2 stores training vectors in a normalized form for all scoring requests. However, the AI_SEMANTIC_CLUSTER function requires non-normalized vectors when it computes the average of the input vectors. To meet this requirement, Db2 adds a new NORMALIZE_ABSOLUTE_VALUE column to the vector table and populates it with an absolute value during model training. The AI_SEMANTIC_CLUSTER function uses the value in the new column to convert the normalized vectors to non-normalized, calculate the input average, and convert the non-normalized vectors back to normalized. The addition of the new column helps improve not only the scoring accuracy of the AI_SEMANTIC_CLUSTER function but also the overall scoring performance of the SQL DI feature.

Related function levels for this APAR: FL 500, FL 506. New function in this APAR takes effect after the PTF is applied and function level 500 or higher is activated. Activating function level 506 or higher verifies that the PTF for this APAR is applied.

For more information, see the following related topics:

Removed stacking limitations for PBG to PBR conversions

Starting in function level 500 or higher with APAR PH51359 (December 2022), Db2 13 supports stacking of certain pending data definition changes when a table in a partition-by-growth (PBG) table space is converted to partition-by-range (PBR). That is, the pending definition changes in the following table can now be issued together and materialized by a single execution of the REORG utility. This capability is especially useful if you need to enlarge the partition data set sizes to accommodate the distribution of data into the partitions, alter the columns that are used as partitioning keys, or alter other table space or index attributes.

Supported stacked pending definition changes for PBG to PBR conversion, by object level:

Table space:
  • BUFFERPOOL
  • DSSIZE
  • SEGSIZE (excluding conversion to UTS)
  • MEMBER CLUSTER

Table:
  • ALTER COLUMN
  • DROP COLUMN

Index:
  • BUFFERPOOL
  • COMPRESS

Before this change, Db2 issued SQLCODE -20385 if you tried to issue any of these alterations while a PBG to PBR conversion was pending, or in the opposite situation, so at least two executions of the REORG utility were required to complete any of these changes when they were needed for the conversion.
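As a sketch (all object names and values are hypothetical), an administrator can now stack a DSSIZE alteration with the conversion and materialize both with a single REORG:

    ALTER TABLE MYSCHEMA.SALES
      ALTER PARTITIONING TO PARTITION BY RANGE (SALE_ID)
        (PARTITION 1 ENDING AT (1000000),
         PARTITION 2 ENDING AT (MAXVALUE));
    ALTER TABLESPACE DB1.TS1 DSSIZE 64 G;

    REORG TABLESPACE DB1.TS1 SHRLEVEL CHANGE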

Related function levels for this APAR: FL 500, FL 504. New function in this APAR takes effect after the PTF is applied and function level 500 or higher is activated. Activating function level 504 or higher verifies that the PTF for this APAR is applied.

For more information, see the following related topics:

Reduced LOGREC entries for parallel queries with FETCH FIRST n ROWS

Starting in Db2 13 function level 500 or higher, APAR PH48183 (September 2022) reduces the number of EREP LOGREC entries with reason code 00E50013 for parallel queries that specify FETCH FIRST n ROWS. The new logic applies for applications that run at APPLCOMPAT level V13R1M500 or higher.

If a query that specifies FETCH FIRST n ROWS runs in parallel, the parallel child tasks continue fetching more records as the parent retrieves and returns them to the application. As soon as n rows are returned, the parent sends a "stop" message to the child tasks. When each child receives the stop message, it stops processing as if canceled. Before this APAR, each child task also writes out an EREP LOGREC entry with reason code 00E50013 in this situation.

With APAR PH48183 applied, the parallel child tasks can stop processing sooner and avoid issuing the EREP LOGREC entries with 00E50013.

Related function levels for this APAR: FL 500, FL 504. New function in this APAR takes effect after the PTF is applied and function level 500 or higher is activated. Activating function level 504 or higher verifies that the PTF for this APAR is applied.

For more information, see the following related topics:

Improved DBAT status for MONITOR THREADS profiles in DISPLAY THREAD output
Starting in Db2 13 at function level 500 or higher, APAR PH47626 improves the status values in DISPLAY THREAD output for DBATs that are queued because the MAXDBAT subsystem parameter or the exception threshold for a MONITOR THREADS profile was reached. This APAR introduces the new status value RS in message DSNV402I to indicate a thread that is suspended because a MONITOR THREADS profile exception threshold was reached. The description of the existing RQ value is also updated to indicate that it applies only to threads that are suspended because the MAXDBAT value was reached.

This APAR also adds a new counter in the output for the DISPLAY DDF command with the DETAIL option. The new PQDBAT counter in message DSNL093I indicates the current number of DBATs queued because a system profile exception threshold was reached.

Related function levels for this APAR: FL 500, FL 504. New function in this APAR takes effect after the PTF is applied and function level 500 or higher is activated. Activating function level 504 or higher verifies that the PTF for this APAR is applied.

For more information, see the following related topics:

V13R1M500 application compatibility

Most new SQL syntax and behaviors that are introduced by this function level become available when applications run at the equivalent application compatibility (APPLCOMPAT) level or higher. Otherwise, an attempt to use a new capability fails with an error such as SQLCODE -4743, or in some cases the previous behavior continues as before. For more information, see the following topics:

How to activate function level 500

Before you begin

Before you activate function level 500 or higher in Db2 13, you must complete the following tasks:

  1. In Db2 12, identify and resolve incompatible changes and activate function level 510 (V12R1M510). You can run the DSNTIJPE premigration queries job in Db2 12 to identify the incompatible changes. For more information, see Activate function level 510 in Db2 12.
  2. Verify that every member was restarted with the fallback SPE applied in Db2 12.
    Important: Inactive members that never started in Db2 12 with the fallback SPE (APAR PH37108) applied cannot start after the first data sharing member is migrated to Db2 13 at function level 100.
  3. If your Db2 environment contains PBR RPN table spaces, APAR PH61633 is an important PE fix that you must apply on all data sharing members before migrating to or activating function levels in Db2 13.
    Attention: If your Db2 environment contains PBR RPN table spaces, do not migrate any data sharing members to function level V13R1M100, or activate function level V13R1M500 or higher, until you apply the PTF for PH61633 on all data sharing members and complete all ++HOLD actions. For more information, see the 21 June 2024 Red Alert.
  4. Migrate the Db2 subsystem or data sharing group to Db2 13, as described in Migrating a Db2 for z/OS subsystem to Db2 13 at function level 100 or Migrating an existing data sharing group to Db2 13.
  5. Verify that you no longer need to fall back to Db2 12.
    Important: After function level 500 is activated in Db2 13, coexistence and fallback to Db2 12 are no longer possible. You can activate function level 100* to disable new capabilities in Db2 13, but function level 100* does not support coexistence or fallback.
  6. In data sharing, ensure that the group has no active Db2 12 members. See Migrating subsequent members of a group to Db2 13.
Procedure

To activate function level 500, complete the following steps:

  1. Generate tailored JCL jobs for the CATMAINT and function level activation steps. You can use the DSNTIJBC batch job or the Db2 installation CLIST.
    Tip: You can avoid working through the Db2 installation CLIST panels in interactive mode by running a batch job with valid input files to generate the required JCL jobs and input files with a background process. See Generating tailored Db2 migration or function level activation jobs in the background.
    To generate the required JCL jobs and input files with a background process, complete the following steps:
    1. Customize the DSNTIDOA parameter override file by following the instructions in the file.
    2. Customize the DSNTIJBC job. For example, if prefix.SDSNSAMP(DSNTIDOA) is the customized parameter override file, you can specify the following values in the ISPSTART command in DSNTIJBC.
        ISPSTART CMD(%DSNTINSB + 
          OVERPARM(prefix.SDSNSAMP(DSNTIDOA)) + 
          ) BREDIMAX(1)
    3. If you use Db2 Value Unit Edition, you must also provide the data set name of the DSNTIDVU parameter override file in the ISPSTART command in the DSNTIJBC job, as shown in the following example, where prefix.SDSNSAMP(DSNTIDVU) is the customized OTC LICENSE file.
        ISPSTART CMD(%DSNTINSB + 
          OVERPARM(prefix.SDSNSAMP(DSNTIDOA)) + 
          OTCLPARM(prefix.SDSNSAMP(DSNTIDVU)) + 
          ) BREDIMAX(1)
    4. Submit the customized DSNTIJBC job.
    To generate the required JCL jobs and input files with the Db2 installation CLIST in interactive mode, complete the following steps:
    1. In panel DSNTIPA1, specify INSTALL TYPE ===> ACTIVATE. Then, specify the name of the output member from the previous function level activation (or migration) in the INPUT MEMBER field, and specify a new member name in the OUTPUT MEMBER field.
    2. In panel DSNTIP00, specify TARGET FUNCTION LEVEL ===> V13R1M500. The Db2 installation CLIST uses this value when it tailors the ACTIVATE command in the DSNTIJAF job and the CATMAINT utility control statement in the DSNTIJTC job. (Function level 500 uses catalog level 100, and the tailored DSNTIJTC job is not used.)
    3. Proceed through the remaining Db2 installation CLIST panels, and wait for the Db2 installation CLIST to tailor the jobs for the activation process. The output data set contains the tailored jobs for the activation process. For more information, see The Db2 installation CLIST panel session.
  2. Check that Db2 is ready for function level activation by issuing the following ACTIVATE command with the TEST option:
    -ACTIVATE FUNCTION LEVEL (V13R1M500) TEST
    Db2 issues message DSNU757I to indicate the results. For more information, see Testing Db2 function level activation.
  3. Run the tailored DSNTIJAF job, or issue the following ACTIVATE command:
    -ACTIVATE FUNCTION LEVEL (V13R1M500)
  4. If you are ready for applications to use new SQL capabilities in this function level, rebind the applications at the equivalent application compatibility level or higher. For more information, see the following topics:

    Optionally, when you are ready for all applications to use the new capabilities of the target function level, you can run the following jobs:
    1. Run the DSNTIJUZ job to modify the subsystem parameter module with the APPLCOMPAT value that was specified on panel DSNTIP00.
    2. Run the DSNTIJOZ job to issue a -SET SYSPARM command that brings the APPLCOMPAT subsystem parameter change online.
    3. Run the DSNTIJUA job to modify the Db2 data-only application defaults module with the SQLLEVEL value that was specified on panel DSNTIP00.
What to do next
To activate new capabilities with catalog dependencies in Db2 13, activate function level 501 or higher. See Function level 501 (Db2 13 installation or migration - May 2022).