Application logging using solidDB, WebSphere Message Broker, and DB2 pureXML

How to set up an infrastructure for application logging

Because of its flexibility and its presence in many application environments, XML is becoming the preferred log format. This tutorial shows how to easily set up an application logging environment by bringing the XML features of IBM® WebSphere® and IBM DB2® together. It first introduces a simple application logging scenario, then shows how to use IBM solidDB® to cache log messages and decouple the application from the logging infrastructure. Use WebSphere MQ and WebSphere Message Broker to reliably move the log messages from the application to DB2, where they can be stored and queried using the pureXML® functionality. To normalize different message formats, use WebSphere Message Broker to transform log files into other XML formats before storing them in DB2.

Daniel G. Schall, IBM Intern, IBM

Daniel Schall studies applied computer science at the University of Kaiserslautern, Germany. Until March 2009, he was an intern at IBM, working with Henrik Loeser on application logging. Currently, he is working on his diploma thesis.



Dr. Henrik Loeser, pureXML Storage Architect for DB2 on Linux, UNIX, and Windows, IBM

Henrik Loeser is the pureXML Storage Architect for DB2 on Linux, UNIX, and Windows, living at Lake Constance in Germany since spring 2008. Before that he was the development manager and technical lead for DB2 pureXML Storage and Index in IBM's Silicon Valley Laboratory, where he enjoyed seven years in sunny California as a DB2 developer. Henrik speaks frequently at conferences, customer sites, and user group meetings. He holds a Ph.D. and an M.Sc. in Computer Science from the University of Kaiserslautern, Germany, and sold his first software at age 15. Henrik can be reached at hloeser@de.ibm.com; his blog is at http://blog.4loeser.net.



07 May 2009


Before you start

About this tutorial

In this tutorial, learn how to couple several IBM products to create an infrastructure for application logging. Use IBM solidDB — a fast, in-memory database — as a cache on the application side to decouple the application from the logging infrastructure. Use WebSphere Message Queue (MQ) to persistently store and transfer messages to WebSphere Message Broker (WMB), where you can analyze and transform the messages into different XML output formats. Finally, store the messages in DB2 for Linux®, UNIX®, and Windows®. The pureXML capabilities make it possible to store the log files in their native XML format and later query and analyze the logs.

Objectives

This tutorial introduces the challenges of application logging, how to use XML in this context, and how to set up an infrastructure that brings application logging into your business.

In this tutorial, learn how to work with IBM tools, including IBM solidDB, WebSphere Message Broker, and DB2 for Linux, UNIX, and Windows.

Prerequisites

This tutorial is written for users whose skills and experience are at a beginning to intermediate level. You should have a general familiarity with installing and using software, especially DB2, WebSphere MQ, WebSphere Message Broker, and solidDB.

System requirements

To set up the infrastructure introduced in this tutorial, you need a Windows box (Server 2003, Vista, or Server 2008) with at least 2GB of free disk space, full administrator access to the box, and the ability to reboot the box several times a day. You should not use a production server machine, but a dedicated box where you can safely play around.


Application logging

Legislative changes, like the Sarbanes-Oxley Act with its requirements for detailed activity logging, together with business trends like service orientation and on-demand business, make it increasingly important to keep track of who is doing what within an enterprise. Application logging is therefore becoming more and more important.

Logging is no longer a feature used just for debugging when something goes wrong inside an application, but is a permanent process to make all transactions traceable and accountable. In business-critical applications, like customer databases or ATM terminals, logging is a vital requirement to keep track of all events. Therefore, logs have to be stored reliably and made searchable.

XML is at the core of SOA and Web services. Moreover, it is flexible and thus ideal for log messages where information, new log types, and applications may be added over time.

Customers usually distinguish between at least two types of logs: technical logs, which capture environment information (which machine, which operating system, and so on), and functional logs, which capture what is done. Both log types can also be combined into a single structure.

Logs of both types contain a lot of information; some parts are business-critical, while other parts are merely informational. Usually, many (rather small) log files are produced — one for every operation or step an application performs. Therefore, tens of millions or even more log files per month can accumulate in a single enterprise. Despite the volume, all files need to be processed efficiently, accurately, and without loss. In addition, client applications must not be impacted by log file processing.

Assuming one log file is between 1KB and 20KB in size and you have to deal with up to 10 million log files per day, you will need approximately 35GB of storage space for uncompressed data for a single day and roughly 3TB for a whole month. Since the clients generating the log files run on lightweight, specialized hardware, they do not provide storage space for the load of log files they produce.

Therefore, you need centralized storage with large capacities, where you can store and analyze the log files. Databases have proven to be the best available storage systems for this type of task. Database management systems with the ability to natively store and query XML documents will facilitate application logging. Listing 1 shows an example XML file:

Listing 1. Sample XML file
<?xml version="1.0" encoding="US-ASCII"?>
<File>
  <Record>
    <Header version="1">
      <Time>2002-11-15 18:19:17.6</Time>
      <Type>INFORMATION</Type>
      <Id>-471559096676384768</Id>
    </Header>
    <Application>
      <Name>SecurityWebService</Name>
      <Function>GetValue</Function>
      <User>JDoe</User>
      <Result>3171861797959368704</Result>
      <Params>
        <Param>
          <Type>Object</Type>
          <Value>Object</Value>
        </Param>
        <Param>
          <Type>Object</Type>
          <Value>security.ssl</Value>
        </Param>
        <Param>
          <Type>Object</Type>
          <Value>0</Value>
        </Param>
      </Params>
      <CallTime>2004-11-15 16:19:17.7</CallTime>
      <StartTime>2006-10-18 12:18:14.7</StartTime>
      <EndTime>2000-11-16 18:14:16.4</EndTime>
      <ReturnTime>2004-11-12 10:10:12.7</ReturnTime>
    </Application>
    <System>
      <Name>INTRANET01</Name>
      <State>498308015556919296</State>
    </System>

    [..]

  </Record>
</File>

Log shipping

A centralized repository for the logs (a database system, for example) is used to integrate the activities from various applications. The data can be analyzed and the "big picture" across all applications can be created.

Having the applications insert their logs directly into the central repository is not feasible for many reasons. To reliably move the log information from the application to the database, a message queue is used. To further decouple the application from the message queue, a small in-memory database system can be used. It also supports buffering messages during peak loads.

Since losing log files in case of failure is not tolerable, all systems involved in log shipping must be transactional.


Scenario

Figure 1 shows the sample scenario architecture you'll set up in this tutorial:

Figure 1. The tutorial's architecture
Architecture set up in this tutorial

Assume there are multiple client applications generating log files and saving them in the solidDB cache databases. IBM solidDB is a fast in-memory cache database that is optimized for high performance. Since it keeps all data in memory and does not write any data to disk, except transaction logs for persistence, it's fast but also reliable.

A Java tool will then pick the log files up and transfer them into WebSphere MQ. WMB will analyze and transfer the files to the DB2 back-end database.

To simulate application logging clients, this tutorial uses a Java tool that generates and loads XML files into the cache databases. The tool — the Java Load Generator Tool — is included with this tutorial (see Downloads). When you run the tool without command line options, it prints usage information. This tutorial uses the tool to load messages into solidDB.

To ship log files into the back-end database, this tutorial uses IBM WebSphere Message Broker. It offers transactional and persistent message queues and routing mechanisms to transfer messages from one location to another. Additionally, you can use WebSphere Message Broker to analyze and transform the messages. Figure 2 shows the message flows introduced in this tutorial:

Figure 2. WebSphere message flows
WebSphere message flows

Messages are read from two queues, holding different message formats. Queue 1 (Q1) will hold messages consisting of one element named Record. Every message will either be routed to another queue (Q3) or to the back-end database.

Queue 2 (Q2) will hold File messages that consist of one or more Record elements. Each File message will be split into single Record messages and a summary message per Record. Record messages will then be routed to the back-end database, while Summary messages will be stored in another queue (Q4).

WMB can interact with IBM DB2 to store messages transactionally. That way, no message can be lost due to (network) failures. IBM DB2 for Linux, UNIX, and Windows will finally store all the messages. It uses pureXML to natively store XML documents in a dedicated data structure. That way, queries can be performed efficiently while the original structure is preserved. DB2 also supports compression of XML data, which saves a lot of storage space.


Preparation

As a first step, you need to download and install the following software packages:

  • WebSphere MQ 7.0
  • WebSphere Message Broker 6.1
  • WebSphere Message Broker Toolkit V6.1
  • solidDB 6.3
  • DB2 9.5 Express-C

Each software package should be installed with the default options. This tutorial describes how to set up and configure them in the following sections.

Database configuration

For storing the log files, this tutorial uses a simple one-table approach with the following DDL statements for solidDB and DB2 (for production systems, enhanced features for the physical design could be applied):

Listing 2. DDL statement for solidDB
CREATE TABLE LOGS (
  LOGTIME TIMESTAMP,
  DOC     CLOB
)
Listing 3. DDL statement for DB2
CREATE TABLE LOGAPP.LOGS (
  LOGTIME TIMESTAMP,
  DOC     XML
)

solidDB at the front end does not require a schema name. For DB2 at the back end, use LOGAPP as the schema name.
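To give a feel for what the logging clients do against this table, the following is a minimal JDBC sketch that inserts one log document into the solidDB LOGS table. The class name CacheWriter and its method are hypothetical illustrations; connString follows the format shown later in Listing 4, and in the tutorial itself the Java Load Generator Tool does this work for you.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

// Hypothetical helper class, not part of the tutorial's downloads
public class CacheWriter {
	// Inserts one XML log document into the solidDB cache table LOGS
	public static void insertLog(String connString, String xmlDocument) throws Exception {
		// Requires SolidDriver2.0.jar on the classpath
		Class.forName("solid.jdbc.SolidDriver");

		Connection connection = DriverManager.getConnection(connString);
		try {
			connection.setAutoCommit(false);
			PreparedStatement insert = connection.prepareStatement(
					"INSERT INTO LOGS (LOGTIME, DOC) VALUES (?, ?)");
			insert.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
			insert.setString(2, xmlDocument); // the document is stored in the CLOB column
			insert.executeUpdate();
			connection.commit(); // make the log record durable in the cache
		} finally {
			connection.close();
		}
	}
}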

Adding new message queues

All messages read from solidDB will be stored on WebSphere MQ. Therefore, you need dedicated queues that hold your messages.

Message formats

Both File and Record are purely fictional log file formats, though they are modeled on a real logging scenario. Record is a single log record, holding the information for a single event. File is a collection of one or more Record elements. Listing 1 shows an example file. If you'd like to know more about the two formats, have a look at the XML Schema definition files (see Downloads).

As Table 2 shows, you need five new queues (note the prefix AL_ for "Application Logging"):

Table 2. Message queues
  • AL_INQUEUE1 (Q1): holds input messages of type Record coming from solidDB.
  • AL_INQUEUE2 (Q2): holds input messages of type File coming from solidDB.
  • AL_OUTQUEUE (Q3): just a sink for messages.
  • AL_SUMMARYQUEUE (Q4): receives all log summary messages for final storage.
  • AL_XMLFILES (Q5): messages put in this queue will be inserted into DB2.
Figure 3. A look into WebSphere MQ Explorer
WebSphere MQ Explorer

Now you've set up your work environment and are ready to get started with application logging.

Get started with Message Broker Toolkit

This tutorial uses WebSphere Message Broker Toolkit, an Eclipse-based tool for developing message flow applications. When you start the application for the first time, you need to select a workspace folder for all project files.

Figure 4. Selecting a workspace
Select a workspace where your message flows will be stored

Now you need to create a new message flow project that holds all flows you develop later. Right-click on the empty project list, and select New > Message Flow Project:

Figure 5. Create new message flow project
Create a new message flow project

A window pops up, asking for the new project's name. Enter logApp as the name for the new project:

Figure 6. Create new message flow project
Create a new message flow project

Message Broker connection

Next, set up a connection to the Message Broker domain instance you created earlier. Using this connection, you can interact with the Message Broker later on. To do this, right-click on your newly created project, and select New > Other:

Figure 7. New domain connection
Create a new domain connection to WebSphere Message Broker

In the opening window, look for and select the item Domain Connection:

Figure 8. New domain connection
Create a new domain connection to WebSphere Message Broker

You now need to enter the name of your Queue Manager. Since we used the default configuration, it is named WBRK61_DEFAULT_QUEUE_MANAGER, using port 2414:

Figure 9. New domain connection
Create a new domain connection to WebSphere Message Broker

Enter logAppConnection as connection name. After clicking Finish, confirm you want to create a new server project in the opening message box:

Figure 10. New domain connection
Create a new domain connection to WebSphere Message Broker

You are now connected to the Message Broker. This is necessary to deploy projects and run them.

Importing message definitions

Message definitions

Please note you can find the XML Schema definitions in the Downloads section of this tutorial.

Next, let's import the XML Schema definitions you'll be using in Message Broker Toolkit. That way, WMB will be able to recognize and parse messages coming from solidDB.

Let's import the two files into WebSphere Message Broker Toolkit. First, you need to add a Message Set to your project. This set will hold all message definitions.

Right-click on the message flow project, and select New > Message Set:

Figure 11. New message set
Create a new message set

Next, specify a name for the new message set. Since the workspace does not contain a message set project yet, you also need to enter a name for the new project. The new project, containing the new message set, will be added to your workspace.

Name the message set logAppMessages and the project logAppMessageSet:

Figure 12. New message set
Create a new message set

You need to specify what types of data the new message set holds. Since you want to process XML data, select XML documents:

Figure 13. New message set
Create a new message set

You have now entered all information needed, and the message set is ready to be created.

Figure 14. New message set
Create a new message set

Remember to save the XML Schema definition files first

You will have to save the files on a local hard disk where the WebSphere Message Broker Toolkit can access them. The files are included with this tutorial in the Downloads section.

This message set will be the container for all custom message formats you use in this tutorial.

You are ready to import the XML Schema files into the message set. Right-click on your message set project, and select New > Message Definition File From > XML Schema File:

Figure 15. Import message definitions
Import message definitions

Next, enter the path to the log file schema documents:

Figure 16. Import message definitions
Import message definitions

Finally, choose the data types and messages you want to import. In this case, select all the boxes by clicking on the Select All button:

Figure 17. Import message definitions
Import message definitions

You have successfully imported the first XML Schema definition into WebSphere Message Broker. To import the schema LogSummary.xsd to your workspace, repeat the steps above.

You need to modify the message set to support single Record elements in your messages. Double-click on the LogFile message definition in your workspace to open the definitions in the right pane.

Figure 18. Modifying the message definition
Modifying the message definition

You need to rename RecordType to Record; otherwise, all messages with Record as the root element will be named RecordType instead of Record. The renaming is already done in Figure 19:

Figure 19. Modifying the message definition
Modifying the message definition

Now you can use the definitions to create mappings between different formats.


Reading messages from solidDB using Java

This section is just a simple example of how to get messages into the queue. There are many other options, like coupling the applications to the queue using JMS (Java Message Service) or connecting directly to a queue manager using TCP/IP.

WebSphere MQ provides a JMS interface to the message queues for saving and retrieving messages using Java. It supports transactions and reliable messaging. Therefore, connecting the applications using JMS brings many advantages.
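As a rough illustration only (not used elsewhere in this tutorial), a minimal JMS sketch for putting one log document onto AL_INQUEUE1 could look like the following. The class name JmsLogSender is hypothetical, and the setup assumes the WebSphere MQ classes for JMS are on the classpath:

import javax.jms.QueueConnection;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;

import com.ibm.mq.jms.MQQueueConnectionFactory;

// Hypothetical example class, not part of the tutorial's downloads
public class JmsLogSender {
	// Puts one XML log document onto AL_INQUEUE1 through the JMS interface
	public static void sendLog(String xmlDocument) throws Exception {
		MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
		factory.setQueueManager("WBRK61_DEFAULT_QUEUE_MANAGER");

		QueueConnection connection = factory.createQueueConnection();
		try {
			// Transacted session: the put only becomes visible after commit()
			QueueSession session = connection.createQueueSession(true, Session.AUTO_ACKNOWLEDGE);
			QueueSender sender = session.createSender(session.createQueue("AL_INQUEUE1"));

			sender.send(session.createTextMessage(xmlDocument));
			session.commit();
		} finally {
			connection.close();
		}
	}
}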

This tutorial does not use JMS, but instead uses solidDB as a cache to decouple the logging applications from the middleware and the process of log shipping.

First of all, your log messages need to be transferred from the solidDB cache into WebSphere MQ. Generally, there are several ways to interact with databases in WebSphere. Unfortunately, solidDB's ODBC driver does not provide support for WMB, and solidDB's JDBC driver does not yet support distributed transactions. These limitations narrow the choices to a custom Java program for reading messages from solidDB.

You also have to make sure that solidDB's JAR file (SolidDriver2.0.jar) is on your Java program's classpath. Otherwise, you will not be able to connect to solidDB.

Listing 4, below, provides a code snippet to connect to solidDB, but before you can connect, you have to correctly fill out the parameters (host, port, user, pass):

Listing 4. Connecting to solidDB
// JDBC connection URL format for solidDB: jdbc:solid://host:port/user/password
String connString = "jdbc:solid://" + host + ":" + port + "/" + user + "/" + pass;

// Load the solidDB JDBC driver (requires SolidDriver2.0.jar on the classpath)
Class.forName("solid.jdbc.SolidDriver").newInstance();

Connection connection = DriverManager.getConnection(connString);

// Read all cached log documents from the LOGS table
Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery("SELECT doc FROM LOGS");

Remember to first install WebSphere MQ Client on all systems that connect to message queues. Otherwise, you will not be able to connect to the message queue.

Listing 5 provides another code snippet to connect to the message queue:

Listing 5. Connecting to WebSphere MQ
String queueManagerName = "WBRK61_DEFAULT_QUEUE_MANAGER";
String queueName = "AL_INQUEUE1"; // or AL_INQUEUE2

// Connect to the local queue manager (requires the WebSphere MQ classes for Java)
MQQueueManager queueManager = new MQQueueManager(queueManagerName);

// Open the queue for writing
MQQueue queue = queueManager.accessQueue(queueName, CMQC.MQOO_OUTPUT, null, null, null);

After you have established connections to the source and the sink of the log messages, you need to loop over the input messages and insert each of them into the queue:

Listing 6. Transferring messages from solidDB to WebSphere MQ
// Use the default put options
MQPutMessageOptions queueMessageOptions = new MQPutMessageOptions();

while (resultSet.next()) {
	// Read the XML log document as a byte array
	byte[] message = resultSet.getBytes(1);

	MQMessage queueMessage = new MQMessage();

	// Use no correlation ID and let the queue manager generate the message ID
	queueMessage.correlationId = CMQC.MQCI_NONE;
	queueMessage.messageId = CMQC.MQMI_NONE;

	queueMessage.write(message);

	// Put the message onto the queue
	queue.put(queueMessage, queueMessageOptions);
}

Note

Successfully transferred messages are not removed from the cache database in the sample snippets. They stay in the cache database and could be inserted into the queue again. You need to take care of deleting transferred messages yourself.
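One simple way to handle the deletion (a sketch, not part of the downloadable samples) is to remember the newest LOGTIME value that was transferred and remove everything up to it once all messages have been put. Because solidDB's JDBC driver cannot take part in a distributed transaction here, a crash between the put and the delete can lead to duplicate messages, which is usually preferable to losing them. The variables latest and connection are assumed to come from an extended version of Listing 4 (with LOGTIME selected alongside DOC and auto-commit disabled):

// After all rows of the result set have been put onto the queue,
// remove them from the cache. 'latest' holds the newest LOGTIME value
// that was transferred; 'connection' is the solidDB connection from
// Listing 4 with auto-commit disabled.
PreparedStatement cleanup = connection.prepareStatement(
		"DELETE FROM LOGS WHERE LOGTIME <= ?");
cleanup.setTimestamp(1, latest);
cleanup.executeUpdate();
connection.commit(); // not atomic with the MQ put: duplicates are possible after a crash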

The full Java code for these snippets is included in the Downloads section.

After you have the messages in the queue, you might want to do something with them before saving them straight to the back-end database. So let's start with a few examples of how to modify XML messages using WebSphere Message Broker.


Using WMB to analyze and transform messages

In order to use the previously created broker connection and the imported message types, you need to add project references to your message flow project. To do this, right-click on your message flow project logApp, and select Properties.

In the opening window, select Project References. Select the two check boxes on the right: logAppMessageSet and Servers.

Figure 20. Message flow project properties
Adding project references

You are now ready to create the message flows in WebSphere Message Broker Toolkit. Right-click on your message flow project, and select New > Message Flow:

Figure 21. New message flow
Adding a message flow to the project

Enter logFlow1 as the name for the new flow, and click on Finish:

Figure 22. New message flow
Adding a message flow to the project

The new flow will open in the upper right window, so you can add and configure nodes in it. You can define sources, sinks, transformations, and other operations to perform with messages.

Routing messages dependent on their content

Let's first take a look at how to analyze messages inside the flow and route them depending on their content. Routing can be used to separate message types. For example, you can forward debug messages to developer machines and business-critical messages, like orders or billing information, to persistent storage.

Figure 23 shows the message flows you will set up in this section. An MQInput node will read messages from the queues. A route node will then redirect messages to different sinks, depending on their content. Some messages will be mapped to a different format using the mapping node.

Figure 23. Sample message flow
Message flow

First, insert an MQInput node that reads messages from the message queue and puts them into your flow. In the menu to the left of the window, select WebSphere Message Broker, then drag and drop an MQInput node into the empty window on the right.

This will be the source, fetching messages from the queue. You need to specify the name of the queue that messages should be read from. Click on the newly inserted node, and have a look at the properties window at the lower right. In the field Queue name, enter AL_INQUEUE1. As mentioned, AL_INQUEUE1 will hold messages of type Record.

Note

Resetting the content descriptor is mandatory because the bit stream loaded into the queue is not recognized as XML. WMB treats it as a BLOB until you explicitly set the content descriptor to the data format you introduced earlier.

The next node is a ResetContentDescriptor node (see Figure 23). This node is necessary to tell WMB the message content is XML and not just a BLOB. Insert the node and configure it, using the following preferences (as shown in Figure 24):

  • Select all the check boxes to reset message domain, reset message set, reset message type, and reset message format.
  • Specify XMLNSC for the message domain.
  • Specify logAppMessages for the message set.
Figure 24. ResetContentDescriptor node
Configuring the ResetContentDescriptor node


Now you can use a route node to inspect the content of the message and route it depending on its content. Assuming the queue holds messages of type Record, you want to route them to different output terminals based on whether the value of the header element Id is odd or even.

After inserting the route node into the flow, right-click on it and use the context menu to rename the output terminals to Odd and Even.

Figure 25. Route node context menu
Configuring the route node

Next, add two patterns to the node, telling it how to route messages. Select the route node in the upper panel, and take a look at its properties page:

Figure 26. Route node properties page
Configuring the route node

To make your route node's properties page look like the one in Figure 26, you have to add two filter patterns to the filter table. Click on Add… to open a dialog box, and enter the first expression: $Body/Record/Header/Id mod 2 = 0. This pattern applies to all messages with an even Id in their header. Then repeat the steps to add the second pattern, $Body/Record/Header/Id mod 2 = 1, which applies to all odd Ids. With this configuration, messages arriving at this node are routed to one of the output terminals based on their content. You can now connect the two output terminals to different successors.

Transforming messages into different formats

Next, you'll use a mapping node to transform messages into another format. Insert a mapping node into the flow and connect one of the route node's output terminals to it; it does not matter whether you connect the Odd or the Even terminal. Double-click on the newly inserted node. A new window opens where the input and output message types can be specified:

Figure 27. New message mapping
Creating a new message mapping, selecting the source and target formats

Since Record messages arrive on this connection, select Record as the input type. LogSummary will be the output type, as shown in Figure 27. In the next window, you can define mappings either by dragging and dropping elements from the left (input) side to the right (output) side or by entering the mapping expression for each element. You can also use built-in functions like fn:concat, as shown in the lower pane of Figure 28:

Figure 28. New message mapping
Creating a new message mapping, defining the mappings


Finally, you need to insert two MQOutput nodes to write messages back to the queue. Connect the remaining terminal of the route node to one of the output nodes, and connect the output terminal of the mapping node to the second output node. Your flow should now look similar to the example in Figure 23. Lastly, you need to configure the MQOutput nodes and specify a destination queue for the messages. Use AL_OUTQUEUE as the destination for the MQOutput node connected to the route node and AL_SUMMARYQUEUE for the one connected to the mapping node.

Splitting messages using mapping nodes

Information

Besides the solution shown in this section, WMB provides a rather easy way to split and summarize messages:
If more than a single connection to an output terminal exists, every message is copied to all connections. Modifying one of these copies does not impact the other ones. Therefore, by adding two connections to an output terminal, one connection can be used to transport the original message, while the other one can route a copy of the message to generate a summary message. However, this section shows another way to achieve the same result.

In this section, use WebSphere Message Broker to split an input message into several output messages and distribute the output messages to different output terminals according to their format. Let's say you want to split a large file into smaller parts and create a summary for each part while preserving the parts themselves, as shown in Figure 29:

Figure 29. Splitting and summarizing a message
Schema of the message flows you will create in this section

Figure 30 shows the flow you want to set up:

Figure 30. Splitting and summarizing a message
The message flows you will create in this section

An MQInput node is connected to the already introduced ResetContentDescriptor node, followed by a mapping node (named Split). This is an extension to your previously created flow. The incoming connection from the first flow is visible at the top and attached to the MQOutput node named SUMMARY. Insert all nodes as shown in the figure first. The MQInput node will read messages from AL_INQUEUE2, so make sure to configure this node as well as the ResetContentDescriptor node. The latter uses the same configuration as the first ResetContentDescriptor, so you can simply copy that one.

Note

As specified in the XML Schema, the element File consists of one or more elements Record.

Double-click on the mapping node to open a window where you can enter the source and target message formats, as shown in Figure 31. Select File as the input type, assuming the queue holds messages of that type. This time, select two messages as mapping/output targets: LogSummary and Record. This way, you can split one input message into several output messages.

Figure 31. Define message types for mapping
Defining message types for mapping

Now, for every message that reaches this node, you want to map each of the inner Record elements into its own message and also spawn a summary message (LogSummary) for every Record. Figure 32 and Figure 33 show the mapping. The for expression in the lower screenshot iterates over the Record elements in the input and generates a separate output message for each.

Figure 32. Mapping node view
Mapping node view


Figure 33. Mapping node details
Mapping node view


Reminder

For expressions can be used to iterate over repeating elements in the input.

After mapping and splitting, there is a problem you have to deal with: both message types, Record and LogSummary, are put on the same output terminal. You want to route these two message types to different flows. Use a JavaCompute node to deal with this issue. After inserting the node into the flow, double-click on it to open an assistant that helps you create a new Java project. Replace the code in the auto-generated class with the example code in Listing 7. (The code is included in the Downloads section.)

Listing 7. Routing messages using JavaCompute
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessage;
import com.ibm.broker.plugin.MbMessageAssembly;
import com.ibm.broker.plugin.MbOutputTerminal;

public class JavaRouteNode extends MbJavaComputeNode {
	public void evaluate(MbMessageAssembly assembly) throws MbException {
		MbOutputTerminal out = getOutputTerminal("out");
		MbOutputTerminal alt = getOutputTerminal("alternate");

		MbMessage message = assembly.getMessage();

		// The last child of the root element is the message body
		MbElement logMessage = message.getRootElement().getLastChild();

		if (logMessage.getFirstElementByPath("./Record") != null) {
			// Message is a complete log record
			out.propagate(assembly);
		} else if (logMessage.getFirstElementByPath("./LogSummary") != null) {
			// Message is a summary
			alt.propagate(assembly);
		} else {
			throw new IllegalArgumentException();
		}
	}
}

Note

Instead of using a JavaCompute node, you could have used a route node. However, it's good to see that there are multiple ways to interact with messages in WMB.

This node is invoked for every message, and it performs two element-path lookups to decide whether the message is a Record or a LogSummary. Depending on the result, the message is sent either to the out terminal or to the alternate terminal (which is another output terminal).

You've now seen several ways to route, transform, and analyze messages using WMB. Next, learn how to store some of the messages waiting in the queue into the back-end database.


Saving messages into DB2

Connecting to DB2

Note

WMB provides various methods to connect to a database. This tutorial uses ODBC just for convenience. To use another connection method, refer to the WebSphere Message Broker Information Center (see Resources).

To write messages directly from WebSphere MQ to DB2, you need to first set up an ODBC connection to the target database.

To add a new ODBC connection, you have to open the ODBC data source configuration in the Windows Control Panel.

Click on Add… to create a new ODBC connection to your database:

Figure 34. ODBC Data Source Administrator
ODBC Data Source Administrator

Since you want to connect to your default DB2 instance, select IBM DB2 ODBC DRIVER – DB2COPY1:

Figure 35. Creating a new ODBC data source
Creating a new ODBC data source

In the next window, enter LOGAPPDB as the data source's name, then click on the Add button:

Figure 36. Creating a new ODBC data source
Creating a new ODBC data source

You now need to enter the user name and password WMB will use to connect to the database. Make sure you select the check box to Save password.

Figure 37. Creating a new ODBC Data Source
Creating a new ODBC Data Source

In the next pane, enter the database name, its host name, and port.

Since WebSphere Message Broker and DB2 reside on the same machine, you can enter localhost as host name. After you click on OK, you can close all the open ODBC windows and return to the WebSphere Message Broker Toolkit.

Figure 38. Creating a new ODBC data source
Creating a new ODBC data source

Importing the database layout

After configuring the database connection, you need to inform WebSphere Message Broker of the database table layout. To do this, you can directly import the database layout from DB2 by adding a connection to the Message Broker Toolkit. In the Broker application development view, switch to the Database Explorer tab and create a new database connection. A window will pop up, where you can specify connection parameters.

After filling out the form with your connection parameters, you can click on Finish.

Figure 39. Creating a database connection in WebSphere Message Broker Toolkit
Creating a database connection in WebSphere Message Broker Toolkit

You are now connected to the database. Next, you have to import database definitions into the toolkit to enable message mappings to database tables. Right-click on the project list, and select New > Database Definition:

Figure 40. New database definition
Creating a database connection in WebSphere Message Broker Toolkit

A new window opens. The project list is empty because the workspace does not yet contain any data design projects. Click on New to add a data design project that will hold the database definitions.

Name the new project databaseDefinitions, and click on Next:

Figure 41. New data design project
Creating a data design project

You do not need to select any references to other projects, so click on Finish:

Figure 42. New data design project
Creating a data design project

After creating the data design project, you are taken back to the database definition window. The name of your newly created project is already selected. Make sure you select the correct DB2 version:

Figure 43. New Database Definition
Creating a database connection in WebSphere Message Broker Toolkit

In the next window, select the database connection you created earlier:

Figure 44. Creating a database connection in WebSphere Message Broker Toolkit
Creating a database connection in WebSphere Message Broker Toolkit

Now, enter the user ID and password to connect to the database:

Figure 45. New database definition
Creating a database connection in WebSphere Message Broker Toolkit

WMB Toolkit connects to the database and retrieves a list of all schemas. Select the LOGAPP schema:

Figure 46. New database definition
Creating a database connection in WebSphere Message Broker Toolkit

In the next window, select at least the table definitions:

Figure 47. New database definition
Creating a database connection in WebSphere Message Broker Toolkit

After importing the table definitions, you can use the tables with message flows.

Creating the flow

Now, you can create a new message flow that reads messages from the message queue (AL_XMLFILES) and stores them into DB2.

The name of the flow is storageFlow. It consists of three nodes: an MQInput node configured to read messages from AL_XMLFILES, followed by a ResetContentDescriptor node and a DatabaseInsert node. Figure 48 shows the flow layout:

Figure 48. Message flow to store messages into DB2
Message flow to store messages into DB2

Click on the DatabaseInsert node (named Backend in Figure 48), and switch to its basic properties. Enter the name of the previously created ODBC data source (LOGAPPDB). Double-click on the new DatabaseInsert node. A new window appears, asking you about the input and output message formats. Select Record as the input message type and your database table LOGS as the output target:

Figure 49. New message map for the storage flow
Creating a message mapping

In the mapping window, you can drag and drop the elements from the left to a table column on the right. Since WMB does not know about the new pureXML features of DB2 and its ability to store XML natively, you cannot simply drag a complex type onto a column. If you did, the toolkit would create a submapping, assuming you want to map certain portions of the complex type into a single column. The toolkit expects you to concatenate or summarize sub-elements so they fit into a single database column.

To map the entire element with all its sub-elements to a database column, you need to use a function that serializes the entire document. This can be done using esql:asbitstream($source/Record), as shown in Figure 50:

Figure 50. Message map view
Mapping the message to a database table.
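If you want to double-check outside of WMB that a serialized document really ends up as native XML in the DOC column, a small JDBC sketch like the following inserts a document directly; DB2 parses the serialized string into its native XML format on insert. The class name, the database name LOGAPPDB in the JDBC URL, the port, and the credentials are assumptions for your own setup (only the ODBC data source was named LOGAPPDB above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical verification class, not part of the tutorial's downloads
public class BackendCheck {
	// Inserts one serialized XML document directly into LOGAPP.LOGS;
	// DB2 parses the string into the native XML format of the DOC column
	public static void insertDirectly(String xmlDocument) throws Exception {
		Class.forName("com.ibm.db2.jcc.DB2Driver");
		Connection db2 = DriverManager.getConnection(
				"jdbc:db2://localhost:50000/LOGAPPDB", "db2admin", "password"); // adjust to your setup

		PreparedStatement insert = db2.prepareStatement(
				"INSERT INTO LOGAPP.LOGS (LOGTIME, DOC) VALUES (CURRENT TIMESTAMP, ?)");
		insert.setString(1, xmlDocument);
		insert.executeUpdate();
		db2.close();
	}
}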

You are finished developing all the necessary message flow nodes, and the project should be able to run on a Message Broker.

Note: Deploying your project into Message Broker is not covered in this tutorial. To get information on how to create a Broker Archive (BAR) and deploy it in Message Broker, please refer to the WebSphere Message Broker Information Center.


Conclusion

This tutorial has shown you how to set up an infrastructure for application logging using several IBM products. IBM solidDB acts as an in-memory cache to decouple the application from the logging infrastructure. WebSphere MQ and WebSphere Message Broker are used to ship, analyze, and transform XML messages. Finally, DB2 for Linux, UNIX, and Windows and its pureXML feature are used to efficiently store and manage the XML data. There is no single product that offers the full logging infrastructure, but, as shown here, it is possible to build such an application logging infrastructure quickly. XML is at the core because it gives the flexibility to change and add to the message formats without impacting the infrastructure itself. Using DB2 with its SQL/XML and XQuery support, the XML data can be queried for failure analysis or auditing purposes.
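To make that last point concrete, here is a minimal JDBC sketch that uses DB2's SQL/XML functions to list all log records written by the user JDoe from the sample format. The class name, JDBC URL, database name, and credentials are assumptions for your own setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical analysis class, not part of the tutorial's downloads
public class LogAnalysis {
	public static void main(String[] args) throws Exception {
		Class.forName("com.ibm.db2.jcc.DB2Driver");
		Connection db2 = DriverManager.getConnection(
				"jdbc:db2://localhost:50000/LOGAPPDB", "db2admin", "password"); // adjust to your setup

		// XMLEXISTS filters on the XML content; XMLQUERY extracts a fragment
		Statement stmt = db2.createStatement();
		ResultSet rs = stmt.executeQuery(
				"SELECT LOGTIME, XMLQUERY('$DOC/Record/Application/Name') " +
				"FROM LOGAPP.LOGS " +
				"WHERE XMLEXISTS('$DOC/Record/Application[User=\"JDoe\"]')");

		while (rs.next()) {
			System.out.println(rs.getString(1) + " " + rs.getString(2));
		}
		db2.close();
	}
}

The same predicates can of course be issued from the DB2 command line or any other SQL interface.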


Downloads

  • LogFile.xsd (4KB): XML Schema definition file (File)
  • LogFile.tox (7KB): ToXgene template file [1]
  • LogSummary.xsd (2KB): XML Schema definition file (Summary)
  • JavaRouteNode.java (2KB): Code [2]
  • solidDB2MQ.java (2KB): solidDB and MQ connection code [3]
  • LoadGenerator.jar (5MB): Java Load Generator Tool [4]

Notes

  1. This is a sample ToXgene file used to generate log files.
  2. This is sample code for WebSphere Message Broker JavaRouteNode.
  3. This is sample code to connect solidDB with WebSphere MQ.
  4. This is a tool to generate and load log files into solidDB and DB2.

Resources

Learn

Get products and technologies

Discuss
