Using DB2 for z/OS pureXML to process SEPA transactions

A practical guide to performance improvement based on customer experiences

The Single Euro Payments Area (SEPA) is a payment integration initiative widely used in Europe to handle standard payment messages. Each SEPA document can contain up to 100,000 credit transfer transactions and must be processed accurately in a short period of time. This article outlines the author's experience in performing a benchmark for a large European corporate and investment bank using DB2® for z/OS® pureXML® for SEPA (ISO 20022). Learn how the team resolved obstacles and eventually achieved its performance goal.

Jane Man (janeman@us.ibm.com), Advisory Software Engineer, IBM

Jane Man is an advisory software engineer and one of the original members of the XML development team in DB2 for z/OS. She has worked on various features of DB2 for z/OS, including XML schema registration and validation, XML parsing, XML serialization, XML indexes, binary XML support, CHECK DATA (in the XML area), and others. In addition to her development work, she is the enablement focal point in the team and is involved in many XML enablement activities, such as creating XML sample applications, demos, and hands-on labs, and presenting at conferences and XML bootcamps. Before joining the XML team, Jane was a developer for IBM Content Manager. Jane is an IBM Certified System Administrator for WebSphere Application Server, IBM Certified Database Administrator for DB2 for z/OS, IBM Certified Database Administrator for DB2 for Linux, UNIX and Windows, IBM Certified Solution Designer for DB2 Content Manager, IBM Certified Deployment Professional for Tivoli Storage Manager, and IBM Certified Application Developer for DB2 Database Family.



31 January 2013

Introduction

What is SEPA/ISO 20022?

The payment integration initiative known as Single Euro Payments Area (SEPA) adopts ISO 20022 payment standard messages and usage rules to simplify bank transfers within the European Union. ISO 20022, the universal financial industry message scheme (known as "UNIFI"), is the international standard that defines the ISO platform for the development of financial message standards. In this benchmark, our customer required us to use the pacs.008 (payments clearing and settlement) schema defined in ISO 20022, which describes the format that financial institutions use for exchanging customer credit transfers.

The Resources section below contains a link to more information on ISO 20022.

What is DB2 for z/OS pureXML?

DB2 pureXML provides the capability to store, validate, and search XML data. Users no longer need to store XML data in a CLOB or BLOB column, or shred the XML data into fields stored in relational columns. Instead, with pureXML, you can insert XML documents directly into an XML column. The XML data is physically stored in a separate XML table, not in the base table.
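For example, here is a minimal sketch of the basic pureXML operations (the DEMO table and its sample data are hypothetical illustrations, not part of the benchmark): create a table with an XML column, insert a document, and search it with XMLEXISTS.

CREATE TABLE DEMO (ID INTEGER NOT NULL, DOC XML);

-- XMLPARSE turns the character string into an XML value;
-- DB2 stores it in the implicitly created XML table
INSERT INTO DEMO (ID, DOC)
  VALUES (1, XMLPARSE(DOCUMENT '<greeting><msg>hello</msg></greeting>'));

-- XMLEXISTS searches inside the stored documents
SELECT ID
FROM DEMO
WHERE XMLEXISTS('$D/greeting/msg' PASSING DOC AS D);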

The Resources section contains a link to more information on DB2 for z/OS pureXML.

In this article

This article demonstrates how you can accomplish the following in an XML SEPA application environment:

  1. Simplify application logic using pureXML
  2. Use the XMLTABLE function to decompose an XML document
  3. Use XML publishing functions to construct a new XML document
  4. Improve performance by rewriting the query

Background for the scenario

The customer is a major corporate and investment bank in Europe. They receive hundreds of XML SEPA documents during the day. Each XML SEPA file contains between one and 100,000 payment requests (credit transfer transactions), and most files contain between 2,000 and 6,000 payment requests. File sizes range from 10KB to 500MB, with a median size of 4MB. Listing 1 shows an example of an XML SEPA document. Each document consists of two major parts: one <GrpHdr> element followed by one to many <CdtTrfTxInf> elements. Each <CdtTrfTxInf> element represents a single payment request.

Their goal is to parse the XML, extract some fields, insert them into a DB2 table, and create a sequential file (called NFN in their environment). They need to generate a unique key that appears both in the DB2 table and in the NFN file. They then process the NFN file to add or transform some data. Finally, they join the processed NFN file back to the DB2 table to create another XML file. See Figure 1 for a view of the overall architecture of this benchmark testing.

Listing 1. Example of XML instance document (Pacs.008.001.02_4.xml from the ISO 20022 website)
<?xml version="1.0" encoding="UTF-8"?>
<Document xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xmlns="urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02">
	<FIToFICstmrCdtTrf>
		<GrpHdr>
			<MsgId>EEEE/100929-EUR/059</MsgId>
			<CreDtTm>2010-09-29T09:00:00</CreDtTm>
			<NbOfTxs>1</NbOfTxs>
			<SttlmInf>
				<SttlmMtd>CLRG</SttlmMtd>
				<ClrSys>
					<Prtry>XYZ</Prtry>
				</ClrSys>
			</SttlmInf>
			<InstgAgt>
				<FinInstnId>
					<BIC>EEEEDEFF</BIC>
				</FinInstnId>
			</InstgAgt>
			<InstdAgt>
				<FinInstnId>
					<BIC>DDDDBEBB</BIC>
				</FinInstnId>
			</InstdAgt>
		</GrpHdr>
		<CdtTrfTxInf>
			<PmtId>
				<InstrId>EEEE/100929-EUR/059/1</InstrId>
				<EndToEndId>ABC/ABC-13679/2010-09-15</EndToEndId>
				<TxId>BBBB/100928-CCT/EUR/912/1</TxId>
			</PmtId>
			<PmtTpInf>
				<InstrPrty>NORM</InstrPrty>
			</PmtTpInf>
			<IntrBkSttlmAmt Ccy="EUR">499250</IntrBkSttlmAmt>
			<IntrBkSttlmDt>2010-09-29</IntrBkSttlmDt>
			<InstdAmt Ccy="EUR">500000</InstdAmt>
			<ChrgBr>CRED</ChrgBr>
			<ChrgsInf>
				<Amt Ccy="EUR">500</Amt>
				<Pty>
					<FinInstnId>
						<BIC>BBBBUS33</BIC>
					</FinInstnId>
				</Pty>
			</ChrgsInf>
			<Dbtr>
				<Nm>ABC Corporation</Nm>
				<PstlAdr>
					<StrtNm>Times Square</StrtNm>
					<BldgNb>7</BldgNb>
					<PstCd>NY 10036</PstCd>
					<TwnNm>New York</TwnNm>
					<Ctry>US</Ctry>
				</PstlAdr>
			</Dbtr>
			<DbtrAcct>
				<Id>
					<Othr>
						<Id>00125574999</Id>
					</Othr>
				</Id>
			</DbtrAcct>
			<DbtrAgt>
				<FinInstnId>
					<BIC>BBBBUS33</BIC>
				</FinInstnId>
			</DbtrAgt>
			<CdtrAgt>
				<FinInstnId>
					<BIC>DDDDBEBB</BIC>
				</FinInstnId>
			</CdtrAgt>
			<Cdtr>
				<Nm>GHI Semiconductors</Nm>
				<PstlAdr>
					<StrtNm>Avenue Brugmann</StrtNm>
					<BldgNb>415</BldgNb>
					<PstCd>1180</PstCd>
					<TwnNm>Brussels</TwnNm>
					<Ctry>BE</Ctry>
				</PstlAdr>
			</Cdtr>
			<CdtrAcct>
				<Id>
					<IBAN>BE30001216371411</IBAN>
				</Id>
			</CdtrAcct>
			<Purp>
				<Cd>GDDS</Cd>
			</Purp>
			<RmtInf>
				<Strd>
					<RfrdDocInf>
						<Tp>
							<CdOrPrtry>
								<Cd>CINV</Cd>
							</CdOrPrtry>
						</Tp>
						<Nb>ABC-13679</Nb>
						<RltdDt>2010-09-15</RltdDt>
					</RfrdDocInf>
				</Strd>
			</RmtInf>
		</CdtTrfTxInf>
	</FIToFICstmrCdtTrf>
</Document>
Figure 1. Overall architecture of the benchmark
(Figure: flow from table INIT, to table SPLIT, to creation of the sequential NFN file, to table NFN, and finally to the table join that produces the output XML.)

Achieving the performance goal

The goal is to process 4 million transactions (that is, 4 million <CdtTrfTxInf> elements) within one hour.


Step 1: Decomposition

The initial XML document is inserted into an XML column in a table (table INIT). In this step, we needed to extract four fields under the <GrpHdr> element, six fields under each <CdtTrfTxInf> element, and each <CdtTrfTxInf> element itself from every XML document, and then insert them into another table (table SPLIT). For XML documents with multiple <CdtTrfTxInf> elements, the <GrpHdr> fields are repeated in each row.
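The DDL for table INIT is not shown in this article. A minimal sketch, assuming only the two columns that the queries below reference (ID and DOCUMENT), might look like the following; the identity column is an assumption for illustration.

CREATE TABLE INIT
    (ID       BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY,
     DOCUMENT XML);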

We made the following attempts.

First attempt:

Listing 2. First attempt for decomposition
INSERT INTO SPLIT (
                   MSGID          ,
                   INTRBKSTTLMDT  ,
                   ENDTOENDID     ,
                   TXID           ,
                   INTRBKSTTLMAMT ,
                   DBTRACCT_IBAN  ,
                   DBTRAGT_BIC    ,
                   CDTRAGT_IBAN   ,
                   CDTRAGT_BIC    ,
                   CANAL          ,
                   CREDTTM        ,
                   DOC
                  )
SELECT
               S.MsgId        ,
               S.INTRBKSTTLMDT    ,
               S.EndToEndId    ,
               S.TxId        ,
               S.IntrBkSttlmAmt,
               S.DbtrAcct_IBAN    ,
               S.DbtrAgt_BIC    ,
               S.CdtrAgt_IBAN    ,
               S.CdtrAgt_BIC    ,
               S.CANAL              ,
               S.CREDTTM            ,
               XMLDOCUMENT(S.DOC)
FROM INIT ,
XMLTABLE(
  XMLNAMESPACES(DEFAULT
  'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02') ,
  '/Document/FIToFICstmrCdtTrf/CdtTrfTxInf'
  PASSING  INIT.DOCUMENT
  COLUMNS
  MsgId        	VARCHAR(35) PATH '../GrpHdr/MsgId' ,
  IntrBkSttlmDt   DATE   	PATH '../GrpHdr/IntrBkSttlmDt' ,
  EndToEndId    	VARCHAR(35) PATH 'PmtId/EndToEndId' ,
  TxId        	VARCHAR(35) PATH 'PmtId/TxId' ,
  IntrBkSttlmAmt  VARCHAR(17) PATH 'IntrBkSttlmAmt' ,
  DbtrAcct_IBAN   VARCHAR(34) PATH 'DbtrAcct/Id/IBAN' ,
  DbtrAgt_BIC    	VARCHAR(11) PATH 'DbtrAgt/FinInstnId/BIC',
  CdtrAgt_IBAN    VARCHAR(34) PATH 'CdtrAcct/Id/IBAN' ,
  CdtrAgt_BIC    	VARCHAR(11) PATH 'CdtrAgt/FinInstnId/BIC',
  CANAL           VARCHAR(35) PATH '../GrpHdr/SttlmInf/ClrSys/Prtry'
     DEFAULT 'CLRG' ,
  CreDtTm        	TIMESTAMP   PATH '../GrpHdr/CreDtTm' ,
  DOC        	XML        	PATH '.'
 ) AS S

This is the most direct approach, using a single XMLTABLE function. However, the parent-axis steps ('../GrpHdr/...') in the column paths made it noticeably slower than the second attempt below.

Second attempt:

In our second attempt, we split the statement into two XMLTABLE functions to avoid the parent axis.

The first XMLTABLE extracted all of the header (<GrpHdr>) information; we then joined it to a second XMLTABLE over the <CdtTrfTxInf> elements. In DB2 V9, shredding a 50MB document finished in about 30 seconds; in DB2 V10, it finished in 19 seconds. The performance was much better than our first attempt above.

Listing 3. Second attempt for decomposition
INSERT INTO SPLIT (
                   SEPAID       ,
                   MSGID        ,
                   INTRBKSTTLMDT    ,
                   ENDTOENDID    ,
                   TXID        ,
                   INTRBKSTTLMAMT    ,
                   DBTRACCT_IBAN    ,
                   DBTRAGT_BIC    ,
                   CDTRAGT_IBAN    ,
                   CDTRAGT_BIC    ,
                   CANAL          ,
                   CREDTTM        ,
                   DOC
           )
SELECT
            TI.Id           ,
            GH.MsgId ,
            GH.IntrBkSttlmDt ,
            S.EndToEndId ,
            S.TxId  ,
            S.IntrBkSttlmAmt ,
            S.DbtrAcct_IBAN ,
            S.DbtrAgt_BIC ,
            S.CdtrAgt_IBAN ,
            S.CdtrAgt_BIC ,
            GH.CANAL      ,
            GH.CreDtTm  ,
            XMLDOCUMENT(S.DOC)
FROM INIT TI,
  XMLTABLE(
     XMLNAMESPACES(DEFAULT
     'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
     '$D/Document/FIToFICstmrCdtTrf/GrpHdr'
      passing TI.DOCUMENT AS D
      COLUMNS
      MsgId  	  VARCHAR(35) PATH 'MsgId',
      IntrBkSttlmDt DATE  	  PATH 'IntrBkSttlmDt',
      CANAL         VARCHAR(35) PATH 'SttlmInf/ClrSys/Prtry'
                                  DEFAULT 'CLRG' ,
      CreDtTm       TIMESTAMP   PATH 'CreDtTm') AS GH,
  XMLTABLE(
      XMLNAMESPACES(DEFAULT
      'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
      '$D/Document/FIToFICstmrCdtTrf/CdtTrfTxInf'
       passing TI.DOCUMENT AS D
       COLUMNS
       EndToEndId     VARCHAR(35) PATH 'PmtId/EndToEndId',
       TxId  	    VARCHAR(35) PATH 'PmtId/TxId',
       IntrBkSttlmAmt DECIMAL(17,5) PATH 'IntrBkSttlmAmt',
       DbtrAcct_IBAN  VARCHAR(34)   PATH 'DbtrAcct/Id/IBAN',
       DbtrAgt_BIC    VARCHAR(11)   PATH 'DbtrAgt/FinInstnId/BIC',
       CdtrAgt_IBAN   VARCHAR(34)   PATH 'CdtrAcct/Id/IBAN',
       CdtrAgt_BIC    VARCHAR(11)   PATH 'CdtrAgt/FinInstnId/BIC',
       DOC  	    XML  		PATH '.'          
) AS S

Step 2: Creating the NFN file

For each XML input document, after decomposition, we needed to use the decomposed data to create a sequential file called NFN. The NFN format specifies a starting position and a length for each field. This was hard to achieve with the UNLOAD utility, since it is difficult to control the starting position at which UNLOAD writes each field. Instead, we used DSNTIAUL to extract the data from the DB2 table and write each field at its required position in the sequential file (NFN).

Listing 4. SELECT statement in DSNTIAUL
SELECT
  CHAR(CHAR(OPID), 20),
  CHAR(MSGID, 35),
  CHAR(CHAR(INTRBKSTTLMDT), 10),
  CHAR(ENDTOENDID, 35),
  CHAR(TXID, 35),
  SUBSTR(CHAR(INTRBKSTTLMAMT), 1, 17),
  CHAR(DBTRACCT_IBAN, 34),
  CHAR(DBTRAGT_BIC, 11),
  CHAR(CDTRAGT_IBAN, 34),
  CHAR(CDTRAGT_BIC, 11),
  CHAR(CANAL, 35),
  SUBSTR(CHAR(CREDTTM), 1, 19)
FROM DMCBEN.TABLESPLIT;

Step 3: Processing the NFN file

In this step, each entry in the NFN file was processed via an internal customer process or program. Some entries might be deleted, and the DBTRAGT_BIC field might be changed.


Step 4: Inserting the modified NFN back to DB2

A COBOL application was created to read the sequential file (NFN) and insert the data back into a DB2 table (table NFN). The LOAD utility was not used here because we needed the following logic to retrieve a unique sequence number (NFNID) and insert it along with the rest of the data from the same NFN file. In this way, we could easily trace whether the rows in the table came from the same NFN file.

 SELECT NEXT VALUE
 FOR XMLMAD_SEQ
 INTO :HV-NFNID
 FROM SYSIBM.SYSDUMMY1
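The INSERT that follows this sequence retrieval is not shown in the article; here is a hedged sketch of what it might look like in the COBOL program. The column list and the host variables other than :HV-NFNID are assumptions based on the columns used later in the merge statement; the same NFNID value is inserted for every row that comes from one NFN file.

 -- one INSERT per NFN record; NFNID is the sequence value fetched above
 INSERT INTO NFN
        (NFNID, OPID, MSGID, INTRBKSTTLMDT,
         INTRBKSTTLMAMT, DBTRAGT_BIC, CANAL)
 VALUES (:HV-NFNID, :HV-OPID, :HV-MSGID, :HV-INTRBKSTTLMDT,
         :HV-INTRBKSTTLMAMT, :HV-DBTRAGT-BIC, :HV-CANAL)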

Step 5: Creating the final XML document for each “CANAL”

This was the most complicated step. We needed to create an XML document from two different tables, table NFN and table SPLIT, following the requirements shown in Table 1 and Table 2:

Table 1. Requirements for "Header" in the output

  XML tag                                              Value
  <GrpHdr>                                             /
  <MsgId> </MsgId>                                     MsgId in table Split
  <CreDtTm> </CreDtTm>                                 Current Date
  <NbOfTxs> </NbOfTxs>                                 Calculation: number of transactions in the final document
  <TtlIntrBkSttlmAmt Ccy="EUR"> </TtlIntrBkSttlmAmt>   Calculation: sum of the flat file (NFN file) zones IntrBkSttlmAmt
  <IntrBkSttlmDt> </IntrBkSttlmDt>                     Value found in the flat file (NFN file) record zone IntrBkSttlmDt
  <SttlmInf>                                           /
  <SttlmMtd> </SttlmMtd>                               Fixed value: CLRG
  <ClrSys>                                             /
  <ClrSysId> </ClrSysId>                               Value found in the flat file (NFN file) record zone "canal"
  </ClrSys>                                            /
  </SttlmInf>                                          /
  </GrpHdr>                                            /

Table 2. Requirement for "Transaction" in the output

  XML data                                                        From
  All data except the <CdtTrfTxInf><DbtrAgt><FinInstnId><BIC> zone   Corresponding data in table Split
  <CdtTrfTxInf><DbtrAgt><FinInstnId><BIC>                            DbtrAgt_BIC in the NFN file

Listing 5 shows the statement we used to achieve this.

Listing 5. Merge statement for both DB2 V9 and V10
SELECT
XMLDOCUMENT (XMLELEMENT(NAME "Document",
  XMLNAMESPACES('http://www.w3.org/2001/XMLSchema-instance'
           as "xsi", DEFAULT
           'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
  XMLATTRIBUTES(
           'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'
           as "xsi:schemaLocation"),
  XMLELEMENT(NAME "FIToFICstmrCdtTrf",
    XMLELEMENT(NAME "GrpHdr",
           XMLELEMENT(NAME "MsgId", IMP.MSGID),
           XMLELEMENT(NAME "CreDtTm", current timestamp),
           XMLELEMENT(NAME "NbOfTxs", count(*)),
           XMLELEMENT(NAME "TtlIntrBkSttlmAmt",
             XMLATTRIBUTES('EUR' as "Ccy"),
             DECIMAL(SUM (IMP.IntrBkSttlmAmt),18,2)),
           XMLELEMENT(NAME "IntrBkSttlmDt", IMP.IntrBkSttlmDt),
           XMLELEMENT(NAME "SttlmInf",
             XMLELEMENT(NAME "SttlmMtd", 'CLRG'),
             XMLELEMENT(NAME "ClrSys",
               XMLELEMENT(NAME "ClrSysId", IMP.Canal))
                        )
        	  ),
    XMLAGG(
      CASE WHEN(EXP.DbtrAgt_BIC=IMP.DbtrAgt_BIC) THEN
              EXP.DOC
      ELSE (
       SELECT XMLELEMENT(NAME "CdtTrfTxInf",
        XMLNAMESPACES(
        'http://www.w3.org/2001/XMLSchema-instance' as "xsi",
        DEFAULT 'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
        XMLAGG(
          CASE WHEN X.NAME = 'DbtrAgt'
          THEN XMLELEMENT(NAME "DbtrAgt",
            (SELECT
               XMLAGG(
                 CASE WHEN Y.SEQNUM = 1
                 THEN XMLELEMENT(NAME "FinInstnId",
                   (SELECT
                      XMLAGG(
                        CASE WHEN Z.SEQNUM = 1
                        THEN XMLELEMENT(NAME "BIC",
                          (SELECT IMP2.DBtrAgt_BIC
                           FROM @@D000.XMLMAD IMP2
                           WHERE IMP2.OPID=SDU2.EXPID) )
                        ELSE Z.NODE
                        END  )
                     FROM
                       (SELECT SDU.EXPID AS EXPID
                        FROM SYSIBM.SYSDUMMYU) SDU2,
                        XMLTABLE(XMLNAMESPACES(DEFAULT
                     'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
                          '*' PASSING Y.NODE
                          COLUMNS
                          SEQNUM FOR ORDINALITY,
                          NODE   XML PATH '.') Z
                        )
                    )
                 ELSE Y.NODE
                 END )
             FROM
               (SELECT SDU0.ID AS EXPID FROM SYSIBM.SYSDUMMYU) SDU,
                XMLTABLE(XMLNAMESPACES(DEFAULT
                 'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
                 '*' PASSING X.NODE
                 COLUMNS
                 SEQNUM FOR ORDINALITY,
                 NODE   XML PATH '.') Y
                ) )
          ELSE X.NODE
          END ) )
       FROM (SELECT EXP.OPID AS ID FROM SYSIBM.SYSDUMMYU) SDU0,
             XMLTABLE(XMLNAMESPACES(DEFAULT
               'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
               '/CdtTrfTxInf/*' PASSING EXP.DOC
               COLUMNS
               NAME VARCHAR(20) PATH 'fn:local-name(.)',
               NODE XML  PATH '.') AS X )
      END )
  )
 )
)
FROM  NFN IMP, SPLIT EXP
WHERE IMP.OPID = EXP.OPID
      AND NFNID = :HV-NFNID
GROUP BY IMP.MSGID, IMP.IntrBkSttlmDt, IMP.Canal

Here are the major ideas behind this merge statement (a small, self-contained sketch of the technique follows the list):

  1. Use the following XMLTABLE function to get all of the elements under <CdtTrfTxInf>:
    	XMLTABLE(XMLNAMESPACES(DEFAULT
                   'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
                   '/CdtTrfTxInf/*' PASSING EXP.DOC
             COLUMNS
             NAME VARCHAR(20) PATH 'fn:local-name(.)',
             NODE XML         PATH '.') AS X
  2. Use XMLAGG to "glue" them together and reconstruct the CdtTrfTxInf element:
    	SELECT XMLELEMENT(NAME "CdtTrfTxInf",
            XMLNAMESPACES(
            'http://www.w3.org/2001/XMLSchema-instance' as "xsi",
            DEFAULT 'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
            XMLAGG(... Y.NODE ...)
          FROM
            (XMLTABLE(XMLNAMESPACES(DEFAULT
               'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
               '*' PASSING X.NODE
               COLUMNS
               SEQNUM FOR ORDINALITY,
               NODE   XML PATH '.') Y
             )
  3. Use a CASE expression to handle the field we are looking for (/CdtTrfTxInf/DbtrAgt/FinInstnId/BIC):
    	XMLAGG( CASE WHEN X.NAME = 'DbtrAgt'
    	             THEN XMLELEMENT(NAME "DbtrAgt", ...)
    	        ELSE X.NODE
    	        END )
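To illustrate these three ideas on a small scale, here is a hedged, self-contained sketch. The document and element names are made up for illustration and are not part of the SEPA schema: XMLTABLE explodes the children of <row>, XMLAGG glues them back together, and the CASE expression copies every child through unchanged except <b>, whose content is replaced.

SELECT XMLELEMENT(NAME "row",
         XMLAGG(
           CASE WHEN X.NAME = 'b'
                -- the <b> child is replaced; all other children pass through
                THEN XMLELEMENT(NAME "b", 'new value')
                ELSE X.NODE
           END
           ORDER BY X.SEQNUM))
FROM XMLTABLE('/row/*'
              PASSING XMLPARSE(DOCUMENT
                '<row><a>1</a><b>old value</b><c>3</c></row>')
              COLUMNS
              SEQNUM FOR ORDINALITY,
              NAME   VARCHAR(20) PATH 'fn:local-name(.)',
              NODE   XML         PATH '.') AS X

The result is <row><a>1</a><b>new value</b><c>3</c></row>; the ORDER BY on the ordinality column keeps the children in document order.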

Issues

Lock escalation occurred during decomposition (and insert) in DB2 V9. For an XML input document containing 100,000 transactions, there would be 100,000 locks on the base table (table SPLIT) and another 100,000 locks on the implicitly created XML table.

The root cause of this locking issue is that the DB2 V9 XML design introduced a new type of lock, the XML lock. An XML lock is acquired per XML document and is designed to prevent readers from seeing partially updated documents. To avoid these additional XML locks, we changed the data type of each transaction (the DOC column below) in table SPLIT from XML to VARCHAR, since we know that each transaction is less than 4KB (see Listing 6).

Listing 6. New DDL for Table SPLIT using VARCHAR for each transaction
CREATE TABLE SPLIT
    (ID                   BIGINT NOT NULL GENERATED ALWAYS
       AS IDENTITY     
         (START WITH 0, INCREMENT BY 1, CACHE 2000, NO CYCLE,
          NO ORDER, MAXVALUE 9223372036854775807, MINVALUE 0),
     MSGID                VARCHAR(35) FOR SBCS DATA  NOT NULL,
     INTRBKSTTLMDT        DATE NOT NULL,
     ENDTOENDID           VARCHAR(35) FOR SBCS DATA  NOT NULL,
     TXID                 VARCHAR(35) FOR SBCS DATA  NOT NULL,
     INTRBKSTTLMAMT       DECIMAL(18, 5) NOT NULL,
     DBTRACCT_IBAN        VARCHAR(35) FOR SBCS DATA  NOT NULL,
     DBTRAGT_BIC          VARCHAR(35) FOR SBCS DATA  NOT NULL,
     CDTRAGT_IBAN         VARCHAR(35) FOR SBCS DATA  NOT NULL,
     CDTRAGT_BIC          VARCHAR(35) FOR SBCS DATA  NOT NULL,
     CANAL                VARCHAR(35) FOR SBCS DATA  NOT NULL,
     CREDTTM              TIMESTAMP NOT NULL,
     DOC                  VARCHAR(4000))
   ;
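With the DOC column defined as VARCHAR(4000), the decomposition statement changes slightly as well: the last expression in the SELECT list of Listing 2 or Listing 3 must serialize the <CdtTrfTxInf> fragment into a string instead of producing an XML value with XMLDOCUMENT. The statement actually used in the benchmark is not shown; here is a hedged sketch of the changed expression in context:

-- serialize each transaction fragment so it fits the VARCHAR(4000) DOC column
SELECT XMLSERIALIZE(XMLDOCUMENT(S.DOC) AS VARCHAR(4000))
FROM INIT TI,
     XMLTABLE(XMLNAMESPACES(DEFAULT
         'urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02'),
       '$D/Document/FIToFICstmrCdtTrf/CdtTrfTxInf'
       PASSING TI.DOCUMENT AS D
       COLUMNS
       DOC XML PATH '.') AS S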

DB2 V10 has a powerful new feature, sub-document update, which allows the merge step to be expressed much more simply (see Listing 7 below). Sub-document update requires the XML data type; it cannot be used with VARCHAR or LOB columns.

Also, DB2 V10 avoids XML locks altogether. This application is therefore better suited to DB2 V10 in terms of both performance and functionality. All of the unusual solutions described above (such as the big "merge" query and the techniques to avoid XML locks) are needed only for DB2 V9.

Listing 7. XMLModify statement in DB2 V10
UPDATE DMCBEN.SEPAJOIN J SET J.DOC=
XMLMODIFY
('declare default element namespace
  "urn:iso:std:iso:20022:tech:xsd:pacs.008.001.02";
 replace value of node /CdtTrfTxInf/DbtrAgt/FinInstnId/BIC with $rep',
(SELECT N.DBtrAgt_BIC FROM DMCBEN.TABLENFN N
  WHERE N.ID=J.ID) AS "rep")
WHERE J.NFNID = CAST(? AS INTEGER)
  AND EXISTS (SELECT 1 FROM DMCBEN.TABLENFN N,
                            DMCBEN.TABLESPLIT S
               WHERE N.ID = J.ID
                 AND N.ID = S.ID
                 AND N.DBtrAgt_BIC <> S.DBtrAgt_BIC)

Results

DB2 V10 met the goal: a single thread processed 4 million transactions in 50 minutes.

With DB2 V9, a single thread needed a little more than one hour to process the same 4 million transactions.


Summary

This article showed how to:

  • Simplify application logic using pureXML
  • Use the XMLTABLE function to decompose an XML document
  • Use XML publishing functions to construct a new XML document
  • Improve performance by rewriting the query

Acknowledgment

Thanks to Jason Cu, Cécile Benhamou, Denis Weill, Francis Arnaudiès, and Philippe Dilain for their comments and assistance with this article.

Resources

Learn

  • ISO 20022: more information on the ISO 20022 universal financial industry message scheme, including the pacs.008 message definitions referenced in this article.
  • DB2 for z/OS pureXML: more information on pureXML support in the DB2 for z/OS documentation.

Get products and technologies

  • Evaluate IBM products in the way that suits you best: download a product trial, try a product online, or use a product in a cloud environment.
