If you want to learn more about XML indexes in DB2, including the XML index enhancements in DB2 10, join the DB2 pureXML devotee conference call on Wed, June 13, at 10am US Eastern Time. For further details and dial-in numbers, see:
The XML Database Blog
DB2 10 also includes a variety of useful XML enhancements. Let’s take a quick look at the most important ones here:
New XML Index Data Types: Integer, Decimal
As you know, when you create an index on an XML element or attribute you must specify the data type that DB2 should use to interpret and index the specified nodes. The reason is that the use of XML Schemas, which might define data types, is optional. And even if an XML Schema is used, it might not define a data type for each and every element and attribute.
Before DB2 10, an XML index for numeric values had to be specified with the data type DOUBLE, so that any type of numeric value could be indexed. In DB2 10 you can also specify the data types INTEGER and DECIMAL to define a numeric XML index.
If you know that a certain element or attribute can only contain INTEGER (or DECIMAL) values, then you can improve query performance by creating the XML index with the most appropriate type. For example, assume that dept is a table that has an XML column deptdoc. Then you can define the following indexes in DB2 10:
create unique index idx1 on dept(deptdoc) generate key using
    xmlpattern '/dept/@id' as sql integer;       -- index path is illustrative

create index idx2 on dept(deptdoc) generate key using
    xmlpattern '/dept/budget' as sql decimal(10,2);   -- index path is illustrative
You don’t need to change existing queries to use these new indexes!
XML Indexes for Case-Insensitive Search
DB2 10 allows you to define case-insensitive indexes and search very efficiently for string values regardless of their upper case or lower case spelling. For example, to enable case-insensitive search of employee names, you could create the following index:
create unique index idx1 on dept(deptdoc) generate key using
    xmlpattern '/dept/employee/name/fn:upper-case(.)' as sql varchar(40);  -- path and length are illustrative
This index stores all key values in upper-case. To find employees with names such as Peter Pan, PETER PAN, Peter PAN, pEtEr pAn, etc. you could write the following query that can use the index above:
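As a sketch, such a query could look like the following; the path /dept/employee/name is an assumption for illustration:

```sql
SELECT deptdoc
FROM dept
WHERE XMLEXISTS('$DEPTDOC/dept/employee/name[fn:upper-case(.) = "PETER PAN"]');
```

Because the predicate applies fn:upper-case to the name, any mix of upper and lower case in the document matches, and the upper-case index can be used.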
Index support for Prefix Search
Previously, the following query with the fn:starts-with() function could not use an XML index to find all employees whose name starts with “Jo”:
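A sketch of such a prefix query; the path is an assumption for illustration:

```sql
SELECT deptdoc
FROM dept
WHERE XMLEXISTS('$DEPTDOC/dept/employee/name[fn:starts-with(., "Jo")]');
```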
In DB2 10, this query can benefit from an existing XML index of type VARCHAR(n) on the <name> element.
XML Indexes for Existential (Structural) Predicates
An existential predicate, sometimes called structural predicate, is a predicate that checks the existence of an XML element or attribute in an XML document, regardless of its value.
For example, imagine you want to find all documents that contain a <comment> element for an employee. One of the following queries comes to mind:
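For example, assuming comments are stored under /dept/employee, either of the following formulations expresses the check (paths are illustrative assumptions):

```sql
-- existence test via a plain path step
SELECT deptdoc FROM dept
WHERE XMLEXISTS('$DEPTDOC/dept/employee/comment');

-- equivalent existence test with fn:exists
SELECT deptdoc FROM dept
WHERE XMLEXISTS('$DEPTDOC/dept/employee[fn:exists(comment)]');
```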
To support such queries, DB2 10 allows you to create an XML index with the fn:exists function, like this:
create index idx1 on dept(deptdoc) generate key using
    xmlpattern '/dept/employee/fn:exists(comment)' as sql varchar(1);  -- path is illustrative
This index must be defined as VARCHAR(1), and each index key represents true or false to indicate the existence of the <comment> element in a given document.
Friendly XML query behavior
DB2 10 also includes several enhancements that make XML queries behave in a more friendly manner. Let’s look at just one of several cases. For example, consider the following query that has a numeric predicate on the @id attribute:
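A query of this shape illustrates the case; the path /dept/@id is an assumption, and the value 15 is the one used in the discussion that follows:

```sql
SELECT deptdoc FROM dept
WHERE XMLEXISTS('$DEPTDOC/dept[@id = 15]');
```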
What happens if this query encounters an XML document where the @id attribute has a non-numeric string value such as “WW Sales”? In DB2 9.x this situation would cause a type error at run time, and the query would fail with an error message. Although this behavior was compliant with the language standard, it is not very useful. If you look for the department with id = 15, a department with a non-numeric id can never be a match. Therefore, DB2 10 simply evaluates the predicate to FALSE in this case, i.e. it ignores the document with the non-numeric id and continues processing the query.
Binary XML client/server communication
With DB2 10, Java applications can choose the format in which XML is transmitted between the application and the DB2 server. You can choose between traditional textual XML and a new binary XML format that is a compressed on-the-wire format for XML.
On the application side, the encoding and decoding of XML to and from the binary format is performed by the JDBC driver (JCC 4.9 or higher). The JDBC driver can convert the binary XML format to and from a text, DOM, SAX, or StAX representation. The use of binary XML provides the most notable performance gain for Java applications that already consume or produce XML in SAX format.
Applications can exchange XML with the DB2 server (through inserts, queries, etc.) and completely avoid the serialization of XML to text, i.e. neither the client nor the server needs to deal with textual XML for the purpose of inserting or retrieving XML. Eliminating the parsing and serialization of textual XML can enable significantly better end-to-end performance.
Other IBM products that support this binary XML communication format include DB2 10 for z/OS, the WebSphere Application Server XML Feature Pack, and the WebSphere DataPower appliance.
Query Performance Enhancements
DB2 10 also includes a number of performance improvements for common types of XML queries.
What is an XML Schema? Some of you may already know this, others don’t. So before I share more technical information about XML Schemas in subsequent blog posts, I’d better get some of the basics out of the way first.
When you process and manage information in XML format, you can choose to use an XML Schema with your XML documents. Roughly speaking, an XML Schema can be used to define what you want your XML documents to look like. For example, in an XML Schema you can define:
If you choose to create an XML Schema, it may define just some or all of the aspects listed above. The designer of the XML Schema can choose the degree to which the schema constrains the characteristics of the XML documents. For example, an XML Schema can be very loose and define only a few key features for your XML documents and allow for a lot of flexibility. Or it can be very strict to tightly control the XML data in every aspect. Or anything in between.
The use of an XML Schema is optional, i.e. an XML Schema is not required to store, index, query, or update XML documents. However, an XML Schema can be very useful to ensure that the XML documents that you receive or produce are compliant with certain structural rules that allow applications to process the XML. In other words, XML Schemas help you to enforce data quality.
If a document complies with a given XML Schema, then the document is said to be valid for this schema. A document might be valid for one schema but invalid for another schema. The process of testing an XML document for compliance with an XML Schema is called validation.
When an XML document is parsed by an XML parser, validation can be enabled as an optional part of the parsing process. Full validation of an XML document always requires XML parsing. For many documents and schemas, validation typically incurs only a small delta cost (in terms of CPU usage) on top of the cost of XML parsing.
What does an XML Schema look like?
An XML Schema itself is an XML document! But, a very special document that needs to comply with very specific rules that are defined by -you guessed it!- another XML Schema, i.e. the schema for schemas.
Large XML schemas can consist of multiple schema documents that reference each other through import and include relationships. This allows you to compose an XML Schema out of smaller building blocks in a modular fashion.
I don’t want to go into the syntax details of XML Schemas here, but there are some useful resources available:
When and why should I use an XML Schema?
Simply put, if you want to ensure data quality and detect XML documents that do not comply with an expected format, use an XML Schema and validate each document!
However, what if XML documents pass through multiple components of your IT infrastructure, such as a message queue, an application server, an enterprise service bus, and the database system? If these components do not modify the XML but merely read & route it, examine whether all of these components need to validate each document. For example, if the application server has already validated a document before insertion into a DB2 database, does the document need to be validated again in DB2? Maybe not, if you trust the application layer. Maybe yes, if you don’t.
An XML Schema is also often used as a “contract” between two or more parties that exchange XML documents. With this contract the parties agree on a specific format and structure of the XML messages that they send and receive, to ensure seamless operation.
Practically every vertical industry has defined XML Schemas to standardize XML message formats for the data processing in their industry. A good overview is given by the introduction of this article:
“Getting started with Industry Formats and Services with pureXML”: http://www.ibm.com/developerworks/data/library/techarticle/dm-0705malaika/
How can I validate XML documents in DB2?
Simple. First, you register one or multiple XML Schemas in the DB2 database. This can be done with CLP commands, stored procedures, or through API calls in the JDBC or .NET interface to DB2. After a schema is registered in DB2, you can use it to validate XML documents in DB2, typically when you insert, load, or update XML documents. You can enforce a single XML Schema for all XML documents in an XML column, or you can allow multiple XML Schemas per column. A database administrator can force automatic validation upon document insert, or allow applications to choose one of the previously registered schemas for validation whenever a document is inserted.
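As a sketch, schema registration from the CLP can look like this; the schema URI and file name are illustrative assumptions, and db2admin.custxsd is the relational identifier used in the example further down:

```sql
-- CLP commands; URI and file name are illustrative
REGISTER XMLSCHEMA 'http://example.com/cust.xsd' FROM 'cust.xsd' AS db2admin.custxsd
COMPLETE XMLSCHEMA db2admin.custxsd
```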
And… validation can also be done in SQL statements?
Yup. The SQL standard defines a function called XMLVALIDATE, which can be used for document validation in INSERT statements, UPDATE statements, triggers, stored procedures, and even in queries.
Here is a simple example of an INSERT statement that adds a row to a customer table, which consists of an integer ID column and an XML column called “doc”:
INSERT INTO customer(id, doc)
  VALUES (?, XMLVALIDATE(? ACCORDING TO XMLSCHEMA ID db2admin.custxsd))
The id and the document are provided by parameter markers “?”, and the XMLVALIDATE function that is wrapped around the second parameter ensures validation against the XML Schema that has been registered under the identifier db2admin.custxsd.
If the inserted document is not compliant with the XML Schema, the INSERT statement fails with an appropriate error message. Similarly, the XMLVALIDATE function can also be used in the right-hand side of the SET clause of an UPDATE statement that modifies or replaces an XML document.
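A sketch of such an UPDATE, using the same column and schema identifier as the INSERT example (the WHERE predicate is illustrative):

```sql
UPDATE customer
SET doc = XMLVALIDATE(? ACCORDING TO XMLSCHEMA ID db2admin.custxsd)
WHERE id = 1;
```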
Ok, so much for now. In my next blog post we’ll go into more detail.
MatthiasNicola 120000E28R Tags:  pathtable sql xml document recursive paths xquery recursion 3,333 Views
A common question is how to obtain a list of all the elements and attributes that occur in an XML document. Producing such a list is what I call “XML profiling” and in a previous blog post I have discussed several SQL/XML queries that can do this.
An extension of this question is how to get the paths of all the elements and attributes in a document. This seemingly simple task is -unfortunately- not nearly as simple as one would think! XPath and XQuery do not have a function that takes a given element or attribute as input and returns the full path to that node.
The solution is to write a query that traverses the XML document level by level, collects the element names at every level, and concatenates them appropriately to construct the paths of all elements and attributes at every level.
There are many ways in which this can be done. You can use XQuery or SQL/XML and you can choose whether to use recursion or not. Let’s look at a few examples.
First, let’s create a simple table with a small document that we can use in the examples:
create table mytable(xmldoc XML);
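The discussion further down refers to paths such as Message/Person/@id, so a matching sample document might be inserted like this; the document content is an assumption for illustration:

```sql
INSERT INTO mytable VALUES(
'<Message>
   <Person id="1">
     <Name>Peter Pan</Name>
   </Person>
 </Message>');
```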
A first and straightforward solution is to start at the root of the document, then look at the first level of child nodes, then at the children of each of these child nodes, and so on. For each element or attribute we construct the path by concatenating the path from the parent with the name of the element or attribute. We do this for all nodes at a given level in the tree and then move to the next level of the document:
The obvious shortcoming of this query is that it assumes a maximum of 5 levels in the document. If your documents are deeper than this, you can easily extend the query so that it goes down to 10 or 20 levels, whatever you need. That’s maybe not very elegant, but it works if you can define an upper bound on the depth of your XML documents, which is usually possible.
You probably noticed that the path Message/Person/id should actually be Message/Person/@id because “id” is an XML attribute. The query can be enhanced to take care of such details; the last two sample queries in my XML profiling post show one way to do this.
If you prefer a more elegant solution that does not require any assumption about the maximum depth of the XML documents, then you need to code a recursive query, either in XQuery or in SQL/XML. Let’s try SQL/XML for a change.
You may already be familiar with how recursive SQL works. If not, you can look at several existing examples. The basic idea is to use a WITH clause, also called “common table expression”, that contains a UNION ALL between the start of the processing and a recursive reference back to the common table expression itself. The following augments this approach with the XMLTABLE function that extracts nodes and node names from the XML:
WITH pathstable (name, node, xpath) AS (
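Fleshed out, the recursive query could look like the following sketch; the column sizes and exact path expressions are assumptions, not a definitive implementation:

```sql
WITH pathstable (name, node, xpath) AS (
  -- seed: the root element of each document
  SELECT x.name, x.node, '/' || x.name
  FROM mytable,
       XMLTABLE('$XMLDOC/*'
         COLUMNS
           name VARCHAR(300) PATH 'local-name()',
           node XML          PATH '.') AS x
  UNION ALL
  -- recursive step: all child elements and attributes of the previous level
  SELECT y.name, y.node, p.xpath || '/' || y.name
  FROM pathstable p,
       XMLTABLE('$NODE/(*,@*)' PASSING p.node AS "NODE"
         COLUMNS
           name VARCHAR(300) PATH 'if (. instance of attribute())
                                   then fn:concat("@", local-name())
                                   else local-name()',
           node XML          PATH '.') AS y
)
SELECT xpath FROM pathstable;
```

The if-then-else in the recursive step prefixes attribute names with “@”, so attribute paths come out as Message/Person/@id rather than Message/Person/id.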
If you want to list the element and attribute values for each path, then you can easily modify this query as follows:
WITH pathstable (name, node, xpath, value) AS (
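A sketch of the extended WITH clause with a value column; again, sizes and paths are assumptions:

```sql
WITH pathstable (name, node, xpath, value) AS (
  SELECT x.name, x.node, '/' || x.name, x.value
  FROM mytable,
       XMLTABLE('$XMLDOC/*'
         COLUMNS
           name  VARCHAR(300) PATH 'local-name()',
           node  XML          PATH '.',
           value VARCHAR(300) PATH 'xs:string(.)') AS x
  UNION ALL
  SELECT y.name, y.node, p.xpath || '/' || y.name, y.value
  FROM pathstable p,
       XMLTABLE('$NODE/(*,@*)' PASSING p.node AS "NODE"
         COLUMNS
           name  VARCHAR(300) PATH 'if (. instance of attribute())
                                    then fn:concat("@", local-name())
                                    else local-name()',
           node  XML          PATH '.',
           value VARCHAR(300) PATH 'xs:string(.)') AS y
)
SELECT xpath, value FROM pathstable;
```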
A few things to note:
Part 2 of our article “Data normalization reconsidered” is now available at
The second part discusses alternatives to a traditional normalized relational representation of data. Such alternatives include, for example, XML, JSON, and RDF, because they can often help you overcome normalization issues or improve schema flexibility, or both. In the 21st century, digitized business records are often created in XML to begin with, which makes XML an attractive choice as the database-level storage format.
This article also contains a performance comparison between XML and relational data that was conducted for a real-world application scenario at a major international logistics company.
At the end of the article you find comparison tables that summarize the pros and cons of different data representations.
Normalization is a design methodology for relational database schemas and aims to minimize data redundancy and avoid data anomalies, such as update anomalies. The consequence of normalization is that business records (such as a purchase order, an insurance claim, a financial transaction, etc.) are split into pieces that are scattered over potentially many relational tables.
In addition to its benefits, normalization also introduces several drawbacks:
These issues raise the question whether normalization should be applied as categorically as some people believe. Indeed, there are several reasons for reconsidering normalization, such as:
Today, business records are often created and exchanged in a digital format, and this format is often XML. XML is a non-normalized data format that can provide several benefits:
When business records already exist in XML format outside the database anyway, it is usually best to also store them as XML and not to convert them into a normalized relational schema.
My colleague Susan Malaika and I have collected our thoughts and observations on normalization in a 2-part article titled “Data Normalization Reconsidered“. It has recently been published on developerWorks and can be found here:
Part 2: http://www.ibm.com/developerworks/data/library/techarticle/dm-1201normalizationpart2/index.html
If you think that mainframe computers are old dinosaurs that only run ancient COBOL code – think again! Now mainframes also run XQuery!
I have put the word “only” in quotes because for many applications XPath and SQL/XML are fully sufficient. When you combine XPath expressions with SQL/XML functions such as XMLTABLE plus other SQL language constructs you can write very powerful XML queries and accomplish many of the same things that you can do with XQuery.
DB2 10 for z/OS has added a variety of new XML features such as node-level XML updates, XML-type parameters and variables in stored procedures and user-defined functions, and enhancements for XML Schemas and XML indexes.
With APARs PM47617 and PM47618, DB2 for z/OS now also supports XQuery within the SQL functions XMLTABLE, XMLQUERY, XMLEXISTS, and XMLMODIFY.
So what are the additional capabilities and benefits that XQuery provides? Examples include:
To see sample queries for each of the above, continue reading here:
First, what is TPoX? I have two answers to that question.
TPoX, short for “Transaction Processing over XML”, is an XML database benchmark that executes and measures a multi-user XML workload. The workload contains XML queries (70%) as well as XML insert, update, and delete operations (30%). TPoX simulates a simple financial application that issues XQuery or SQL/XML transactions to stress XML storage, XML indexing, XML Schema support, XML updates, logging, concurrency and other components of an XML database system.
The TPoX package contains:
TPoX is a very flexible and extensible tool for performance testing of relational databases, XML databases, and other systems. For example, if you have a populated relational database you can use the TPoX workload driver to parameterize, execute, and measure plain old SQL transactions with hundreds of simulated database users. I have used TPoX for a lot of relational performance testing, because it’s so easy to set up and measure concurrent workloads. The workload driver reports throughput, min/max/avg response times, percentiles and confidence intervals for response times, and other useful metrics. Oh, and by the way, TPoX happens to include an XML data generator and a set of sample XML transactions, in case you’re interested in XML database performance.
In the latest release, TPoX 2.1, we have further enhanced the extensibility of the TPoX Workload Driver. The XML data and XML transactions are still the same.
Some of the enhancements in TPoX 2.1 include:
We have already found these extensions extremely valuable for some of our own performance testing, and we’re happy to share them. You can download TPoX 2.1 (free, open source) and find more detailed information in the release notes as well as the TPoX documentation that is included in the download.
The annual Information on Demand conference in Las Vegas (Oct 23-27, 2011) is always a great venue to get updated on the latest topics in data management. The IOD conference offers a broad range of technical sessions, business sessions, hands-on labs, and lots of networking opportunities. More than 200 customer speakers will present their first-hand experiences with IBM Software.
The conference is in the Mandalay Bay convention center. Various topics around DB2 pureXML are covered in presentations and hands-on labs. Some of them are listed here:
2088A – DB2 pureXML for Beginners
1120B – DB2 pureXML: Develop Your Application Prototype in Minutes, not Days
1930A – DB2 for z/OS SOA Solutions Powered by pureXML and DataPower
2691A – Architecting with IBM pureXML and Temporal Data in DB2 10 for z/OS
2087B – Get Connected: Publishing Relational Data as XML Messages
If you’re going to IOD, don’t miss these sessions!
If you look at existing documentation and examples of the XMLTABLE function, you find that most examples use simple XPath expressions inside the XMLTABLE function and rarely more complex XQuery expressions such as FLWOR expressions. This is also true for my 2 articles on XMLTABLE:
XMLTABLE By Example, Part1 and Part2:
Why is that? Is it because XQuery expressions other than plain path expressions are never required in the XMLTABLE function? Well, let’s take a look at that question…
First, I do believe that many typical SQL/XML queries can indeed be coded with just regular XPath in the XMLTABLE function. I have seen that this is true in many real application deployments.
But, sometimes it can certainly be useful to embed more complex XQuery expressions than just XPath in an XMLTABLE function. For example, in my blog post “How to get a list of all elements and attributes” you saw that I used the XQuery comma operator to construct a sequence from two expressions. In that case the comma expression was //(*,@*) to get all elements and all attributes. In the same post you also saw the use of the XQuery if-then-else expression in the column definition of the XMLTABLE function.
Using FLWOR expressions in the XMLTABLE function can be useful if you join and combine XML from two different XML columns. For example, let’s consider the following two tables with the subsequent sample documents:
CREATE TABLE order(orderdetails XML);
CREATE TABLE product(description XML);
The order table contains one XML document per order, and the product table one XML document per product. Each product has a @pid attribute as a unique identifier, and orders have /order/item/pid elements to specify which products have been ordered. Naturally, this allows us to perform a join between orders and products.
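Hypothetical sample rows matching that structure; apart from @pid and /order/item/pid, which are given above, the element names are assumptions for illustration:

```sql
INSERT INTO order VALUES(
'<order OrderID="123">
   <item><pid>P100</pid><quantity>2</quantity><price>9.99</price></item>
   <item><pid>P200</pid><quantity>1</quantity><price>24.50</price></item>
 </order>');

INSERT INTO product VALUES(
'<product pid="P100"><weight>1.5</weight></product>');
```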
For example, the following query is a join to retrieve the product information for all the products that have been ordered in order 123:
This query returns one row (one XML document) for each product that is referenced in order 123.
Now, what if you want to combine data from an order document and the associated product documents, and return this information in one relational row per product? For example, assume you want rows that show the product ID and product weight (from the product documents) as well as the price and quantity of that product in the order document.
In that case you could use an XQuery FLWOR expression to perform the join and combine the product and order information in the XMLTABLE function, as shown in the following query.
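A sketch of such a query; the variables $ORDERDETAILS and $DESCRIPTION are DB2’s implicit names for the two XML columns, the <result> construction follows the description that comes next, and the item-level element names are assumptions:

```sql
SELECT T.*
FROM order, product,
     XMLTABLE('for $i in $ORDERDETAILS/order[@OrderID = "123"]/item
               for $p in $DESCRIPTION/product[@pid = $i/pid]
               return <result>{$p/@pid}{$p/weight}{$i/price}{$i/quantity}</result>'
       COLUMNS
         prodid VARCHAR(10)  PATH '@pid',
         weight DECIMAL(7,2) PATH 'weight',
         price  DECIMAL(9,2) PATH 'price',
         qty    INTEGER      PATH 'quantity') AS T;
```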
Since the product and order tables appear in the FROM clause of the query, the FLWOR expression references their XML columns through the variables $DESCRIPTION and $ORDERDETAILS. The return clause of the FLWOR expression constructs XML fragments that combine the desired elements and attributes from each matching pair of product and order documents. The constructed <result> elements are then input to the COLUMNS clause where they are broken up into relational columns.
It is worth noting that you can produce the same result set without the use of FLWOR expressions, as shown in the next query. This query uses two XMLTABLE functions, one for the desired order information and one for the desired product information. The key trick is that the two XMLTABLE functions are joined on the product ID using the predicate [@pid = $p]. Without this predicate, the two XMLTABLE functions would produce a Cartesian product, which is not desired.
SELECT T2.prodid, T2.weight, T1.qty, T1.price
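Fleshed out, that query might look like the following sketch; element names and column types are assumptions, and the join predicate [@pid = $p] is the one described above:

```sql
SELECT T2.prodid, T2.weight, T1.qty, T1.price
FROM order, product,
     XMLTABLE('$ORDERDETAILS/order[@OrderID = "123"]/item'
       COLUMNS
         pid   VARCHAR(10)  PATH 'pid',
         qty   INTEGER      PATH 'quantity',
         price DECIMAL(9,2) PATH 'price') AS T1,
     XMLTABLE('$DESCRIPTION/product[@pid = $p]' PASSING T1.pid AS "p"
       COLUMNS
         prodid VARCHAR(10)  PATH '@pid',
         weight DECIMAL(7,2) PATH 'weight') AS T2;
```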
The join predicate between the two XMLTABLE functions can also be placed in the SQL WHERE clause:
SELECT T2.prodid, T2.weight, T1.qty, T1.price
However, this query applies the join predicate after the XMLTABLE functions have produced their rows, which can be slower than the previous query, where the predicate was inside the XMLTABLE function itself.
You can find many more examples of the XMLTABLE function, XML joins, and joins between XML and relational columns in Chapter 9 of the DB2 pureXML Cookbook.