IBM Support

Improving impact analysis accuracy for EXEC SQL FETCH statements in COBOL and PL/I programs

Troubleshooting


Problem

For a COBOL or PL/I program that uses EXEC SQL FETCH and is analyzed by the proprietary scanner, an impact analysis project that traces through the FETCH statement does not correctly identify any of the data elements on that statement as being modified.

Symptom

At run time, an EXEC SQL FETCH statement updates the data items named on the FETCH statement with the data defined by the select list of the associated cursor.  Prior to the V6.1.0.21 fix pack, however, insufficient detail was stored during analysis to make the connection between the DB2 columns being fetched and the data items receiving the data, so an impact analysis that traced through the FETCH statement did not report those data items as modified.
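As background, a FETCH moves each column of the cursor's select list into a corresponding host variable (and, optionally, an indicator variable) named on the FETCH INTO clause.  A minimal COBOL sketch, purely for illustration (the cursor, table, and host-variable names are hypothetical):

```sql
      * Illustrative only; C1, EMP, and the WS-* names are hypothetical
           EXEC SQL DECLARE C1 CURSOR FOR
             SELECT EMP_NAME, EMP_SALARY FROM EMP
           END-EXEC.

           EXEC SQL FETCH C1
             INTO :WS-EMP-NAME :WS-NAME-IND,
                  :WS-EMP-SALARY
           END-EXEC.
```

Here WS-EMP-NAME and WS-EMP-SALARY (the FETCH INTO targets) and the indicator variable WS-NAME-IND are all modified by the FETCH, so an impact analysis that traces through this statement should report each of them as modified.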

Resolving The Problem

As of the V6.1.0.21 fix pack, additional details are stored in the database when a COBOL or PL/I program is analyzed using the proprietary scanner, and an EXEC SQL FETCH statement is encountered.  

For Rational Asset Analyzer: 

  • If you have the V6.1.0.17 fix pack or earlier, migrate your database with the latest migration utility prior to installing the V6.1.0.21 fix pack.  If you already downloaded the V6.1.0.21 fix pack prior to August 16, 2019, you should download it again from Fix Central and use the newer file for installing.
  • If you have the V6.1.0.18 fix pack or later, install the V6.1.0.21 fix pack.  If you already downloaded the V6.1.0.21 fix pack prior to August 16, 2019, you should download it again from Fix Central and use the newer file for installing, even if you had previously installed a V6.1.0.21 fix pack.

For Rational Asset Analyzer for System z: 

  • If you have the V6.1.0.17 fix pack or earlier, migrate your database with the latest migration utility prior to installing the V6.1.0.21 fix pack.  Once the V6.1.0.21 fix pack is installed, contact IBM for a test fix with the updated AESPANAL load module.
  • If you have the V6.1.0.18 fix pack or later, install the V6.1.0.21 fix pack.  Once the V6.1.0.21 fix pack is installed, contact IBM for a test fix with the updated AESPANAL load module.

To capture that additional detail for a program, the program must be reanalyzed.

Step 1 – Update the product runtime

Install the latest V6.1.0.21 runtime from August 16, 2019 or later as described above.

Step 2 – List the affected files / programs

From the web UI, navigate to Database -> Execute a Query, paste the following query, and click Submit.  When the results are listed, click the Export to CSV icon in the upper right-hand corner of the result list to save a list of the impacted programs.
WITH FILES AS (
 SELECT DISTINCT f.file_name, f.file_id, f.language_cd, cu.comp_unit_name
  FROM $(SCHEMA).dmh_cu_stmt cus
  -- stmt_type_id 7043 = EXEC SQL FETCH
  JOIN $(SCHEMA).dmh_stmt_type stype ON stype.stmt_type_id = cus.stmt_type_id AND cus.stmt_type_id = 7043
  JOIN $(SCHEMA).dmh_compile_unit cu ON cu.comp_unit_id = cus.comp_unit_id
  JOIN $(SCHEMA).dmh_file f ON f.file_id = cu.member_id
  JOIN $(SCHEMA).dmh_num_property actualScanWps ON actualScanWps.cmpnt_type_id = 9
                                         AND actualScanWps.related_id_1 = f.file_id
                                         AND actualScanWps.attr_type_id IN(191, 194)
                                         AND actualScanWps.num_value = 1
  ORDER BY 1, 2
),
SORTED_FILES AS (
  SELECT ROW_NUMBER() OVER() AS ROW_NUM, f.* FROM FILES f
)
SELECT f.* FROM SORTED_FILES f
 

Step 3 – Queue the programs for analysis

For RAA, copy the following two INSERT statements to a file (e.g., C:\temp\raa_aq.sql), change $(SCHEMA) to your schema name, and then execute these commands from a DB2 Command Prompt window, using your database name in place of DMHDB:

db2 connect to DMHDB
db2 -td; -vf C:\temp\raa_aq.sql
db2 connect reset
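For illustration, the file might look like the following sketch, where DMH is an assumed schema name and the elided bodies are the full statements from the "First INSERT" and "Second INSERT" sections below.  Each statement must end with a semicolon so that the -td; option can delimit them:

```sql
-- Sketch of C:\temp\raa_aq.sql; DMH is an assumed schema name
INSERT INTO DMH.dmh_analysis_queue
 WITH FILES AS ( ... )
 ...
 SELECT f.* FROM NEW_AQ f;

INSERT INTO DMH.dmh_analysis_parms
 WITH AQ2FIX AS ( ... )
 ...
 SELECT f.* FROM NEW_AQ_PARM f;
```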
For RAAz, use SPUFI to execute the following two INSERT statements after changing $(SCHEMA) to your schema name for the product tables.

See the note at the end of this document for details about processing groups of programs.

First INSERT

INSERT INTO $(SCHEMA).dmh_analysis_queue
 WITH FILES AS (
  SELECT DISTINCT f.file_name, f.file_id, f.language_cd, cu.comp_unit_name
   FROM $(SCHEMA).dmh_cu_stmt cus
   JOIN $(SCHEMA).dmh_stmt_type stype ON stype.stmt_type_id = cus.stmt_type_id AND cus.stmt_type_id = 7043
   JOIN $(SCHEMA).dmh_compile_unit cu ON cu.comp_unit_id = cus.comp_unit_id
   JOIN $(SCHEMA).dmh_file f ON f.file_id = cu.member_id
   JOIN $(SCHEMA).dmh_num_property actualScanWps ON actualScanWps.cmpnt_type_id = 9
                                          AND actualScanWps.related_id_1 = f.file_id
                                          AND actualScanWps.attr_type_id IN(191, 194)
                                          AND actualScanWps.num_value = 1
   ORDER BY 1, 2
 ),
 SORTED_FILES AS (
    SELECT ROW_NUMBER() OVER() AS ROW_NUM, f.* FROM FILES f
 ),
 NEW_AQ AS (
   SELECT f.row_num + (SELECT VALUE(MAX(analysis_queue_id), 0) FROM $(SCHEMA).dmh_analysis_queue) AS analysis_queue_id
  , f.file_id AS related_id_1
  , 0 AS related_id_2
  , 8 AS analysis_type_cd
  , 9 AS cmpnt_type_id_1
  , 0 AS cmpnt_type_id_2
  , CURRENT TIMESTAMP AS create_timestamp
  , 1000001 AS create_tool_id
  , (SELECT user_id FROM $(SCHEMA).dmh_user WHERE user_name = USER) AS create_user_id
     FROM SORTED_FILES f
     WHERE NOT EXISTS(SELECT 1
                      FROM $(SCHEMA).dmh_analysis_queue q
                      WHERE q.analysis_type_cd = 8 AND q.cmpnt_type_id_1 = 9 AND q.related_id_1 = f.file_id)
        AND f.row_num BETWEEN 1 AND 1000000
 )
 SELECT f.* FROM NEW_AQ f;
 

Second INSERT

INSERT INTO $(SCHEMA).dmh_analysis_parms
 WITH AQ2FIX AS (
  SELECT aq.*
   FROM $(SCHEMA).dmh_analysis_queue aq
   WHERE aq.analysis_type_cd = 8 AND aq.cmpnt_type_id_1 = 9
     AND NOT EXISTS(SELECT 1 FROM $(SCHEMA).dmh_analysis_parms ap
                    WHERE ap.analysis_queue_id = aq.analysis_queue_id)
 ),
 NEW_AQ_PARM AS (
    SELECT aq.analysis_queue_id, 'FORCE_RESCAN' AS parm_key, 'Y' AS parm_value
      FROM AQ2FIX aq
    UNION ALL
    SELECT aq.analysis_queue_id, 'SITE' AS parm_key, site.name AS parm_value
      FROM AQ2FIX aq
      JOIN $(SCHEMA).dmh_file f ON f.file_id = aq.related_id_1
      JOIN $(SCHEMA).dmh_container cnr ON cnr.container_id = f.container_id
      JOIN $(SCHEMA).dmh_site site ON site.site_id = cnr.site_id
 )
 SELECT f.* FROM NEW_AQ_PARM f;
 

Step 4 – Perform the analysis

Start the postprocessor to analyze the files that you’ve queued.

Step 5 – Verify the results

From the web UI, navigate to Database -> Execute a Query, paste the following query, and click Submit.  When the results are listed, there should be rows for PHRASE_TYPE_ID 1004 and 1018.  The 1004 row has the count of data elements that are the target of a FETCH INTO, and it should have a non-zero result.  The 1018 row has a count of the data elements that are used as indicators that are the target of a FETCH INTO, and it should have a non-zero result if indicators are used in your FETCH statements.

SELECT cdu.phrase_type_id, COUNT(*) AS count
 FROM $(SCHEMA).dmh_cu_data_use cdu
 JOIN $(SCHEMA).dmh_cu_stmt cus ON cus.comp_unit_id = cdu.comp_unit_id
                               AND cus.source_line_no = cdu.source_line_no
                               AND cus.verb_column_no = cdu.verb_column_no
                               AND cus.stmt_type_id = 7043
 WHERE cdu.phrase_type_id IN(1004, 1018) GROUP BY cdu.phrase_type_id
ORDER BY 1
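As an illustration only (the counts will differ for your code base), a successful reanalysis of programs whose FETCH statements use indicator variables would yield a result shaped like:

```
PHRASE_TYPE_ID  COUNT
          1004    152
          1018     37
```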
 

Note - Processing groups of files

If you want to process the requests to reanalyze files in smaller groups, you could iteratively queue up groups of files in Step 3 and process them in Step 4.  To do that, you would modify the first INSERT statement in Step 3 so that the predicate “f.row_num BETWEEN 1 AND 1000000” includes the desired row numbers (from viewing the query results in Step 2), and then execute it.   For example, assume there were 3000 files reported in Step 2, and you wanted to process them in groups of 1000.  You’d iteratively run Step 3 and Step 4:

  1. Update the first INSERT in Step 3 using “f.row_num BETWEEN 1 AND 1000”, and execute it.
  2. Execute the second INSERT in Step 3 unchanged.
  3. Perform the analysis in Step 4.
  4. Update the first INSERT in Step 3 using “f.row_num BETWEEN 1001 AND 2000”, and execute it.
  5. Execute the second INSERT in Step 3 unchanged.
  6. Perform the analysis in Step 4.
  7. Update the first INSERT in Step 3 using “f.row_num BETWEEN 2001 AND 3000”, and execute it.
  8. Execute the second INSERT in Step 3 unchanged.
  9. Perform the analysis in Step 4.

For ease in copying the SQL text, it is attached here: sql.txt

Document Location

Worldwide


Product Synonym

RAA;RAAz

Document Information

Modified date:
19 August 2019

UID

ibm10967810