Who knew data compression could be so easy in WebSphere MQ?
Gregory (Greg) Bowman
As you probably know all too well, the size of the data used by applications seems to grow every day, and as the data grows, so does the time it takes to transfer it. That means users waiting on the data have to wait longer for it to arrive, and their patience is not growing at the same rate as the data. If you want to keep your users happy, you have to make the transfer faster. One way to do that is to compress the data before it is transferred and expand it again after it arrives.
If you had to write your own routines to compress your messages, it could become quite complex and time-consuming. Fortunately, WebSphere MQ (WMQ) includes data compression techniques that are quite easy to implement. Compression in WMQ is configured on each channel, so you can choose whether or not data is compressed simply by choosing which channel carries it. If that is not granular enough, you can even make the decision at the individual message level by using message exits. That technique is slightly more involved, so I will not go into great detail about it here. The general idea of this article is to show you how simple it can be to use data compression.
Before we get into the details of how to perform data compression in WMQ, you may be asking whether it is really worth your time to set this up and how much benefit you can expect to gain. To answer those questions, I should first explain that WMQ offers three compression techniques, and you need to decide which of the three best fits the type of data you are sending.
The first technique is RLE (run-length encoding). This compression can be done very quickly, but it is really only useful if your data contains long runs of repeated characters. For example, if your data holds a lot of blank-padded names, then RLE would be a good choice.
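To see why that kind of data suits RLE, here is a minimal run-length-encoding sketch in Python. This is my own illustration of the general idea, not the algorithm WMQ uses internally, applied to a blank-padded field:

```python
from itertools import groupby

def rle_encode(data: bytes) -> list:
    """Collapse each run of identical bytes into a (byte, count) pair."""
    return [(byte, len(list(run))) for byte, run in groupby(data)]

# A 20-byte name field, blank-padded as described above.
field = b"SMITH" + b" " * 15

encoded = rle_encode(field)
print(len(field))    # 20 bytes in
print(len(encoded))  # 6 pairs out: 5 single letters plus one run of 15 blanks
print(encoded[-1])   # (32, 15) -- the blank padding collapses into one pair
```

Data without long runs of repeated bytes gains nothing from a scheme like this, which is why WMQ also offers the zlib-based techniques.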
If your data is more complex, you may benefit more from ZLIBHIGH or ZLIBFAST. Both techniques use the standard zlib compression library, which originated with the authors of the Zip and gzip routines and is used in many different software packages. The two settings let you choose how much time to spend on compression: ZLIBFAST compresses the data more quickly but less thoroughly, while ZLIBHIGH uses the same algorithm but spends more time and compresses the data more.
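The speed-versus-size tradeoff is easy to demonstrate with the zlib library itself. In this Python sketch, zlib's numeric levels stand in purely as an analogy (level 1 for the "fast" end of the tradeoff, level 9 for the "high" end); they are not WMQ channel settings:

```python
import zlib

# A repetitive payload in the spirit of the test described below:
# 36 lines of text, each blank-padded to 72 bytes.
message = b"".join((b"line %2d" % i).ljust(72) for i in range(36))

fast = zlib.compress(message, 1)  # quicker, compresses less
high = zlib.compress(message, 9)  # slower, compresses more

print(len(message))              # 2592 bytes before compression
print(len(fast) < len(message))  # True -- even level 1 shrinks it considerably
print(len(high) <= len(fast))    # True -- level 9 output is at least as small
```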
The rate of compression is highly dependent on the specific data in your messages. As an example, a 2590-byte message of simple text was used, with each 72-byte line padded with blanks to fill the line. The message was sent over a channel using each of the different compression techniques. Here is a table showing the rate of compression and the time in milliseconds it took to complete that compression.
Remember that these are some very simplistic tests with some very simplistic data. You would have to do your own testing with your own data to determine which technique suits your data and your desired results the best.
So now we finally get to the point I promised in the title: it really is quite simple to set up data compression in WMQ. Each channel has two parameters, COMPHDR and COMPMSG. COMPHDR defines the kind of compression to apply to the message headers; COMPMSG defines the kind of compression to apply to the message data itself. Like most channel parameters, the compression settings are negotiated between the sending and receiving ends of the channel. When you specify COMPMSG or COMPHDR, you list the techniques in order of preference, and the list on the sender (SDR) side is compared to the list on the receiver (RCVR) side until a match is found. Typically you have full control over both sides of the channel, so you only need to specify one value per parameter on each side. For example:
SDR : COMPHDR(SYSTEM) COMPMSG(ZLIBHIGH)
RCVR : COMPHDR(SYSTEM) COMPMSG(ZLIBHIGH)
However, when you are not sure which values will be set at the partner end, you can list multiple techniques on each channel. For example:
SDR : COMPHDR(SYSTEM, NONE) COMPMSG(ZLIBHIGH, ZLIBFAST, RLE, NONE)
RCVR : COMPHDR(SYSTEM, NONE) COMPMSG(RLE, NONE)
SYSTEM compression will be used for the header data because that was the first match for COMPHDR (COMPHDR accepts only NONE and SYSTEM, where SYSTEM compresses the headers using RLE), and RLE compression will be used for the message data because that was the first match for COMPMSG.
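For reference, these attributes belong on the channel definitions themselves. Here is a sketch in MQSC, with the channel name, connection name, and transmission queue invented for the example:

```
DEFINE CHANNEL('QM1.TO.QM2') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('qm2.example.com(1414)') XMITQ('QM2') +
       COMPHDR(SYSTEM, NONE) +
       COMPMSG(ZLIBHIGH, ZLIBFAST, RLE, NONE)

DEFINE CHANNEL('QM1.TO.QM2') CHLTYPE(RCVR) TRPTYPE(TCP) +
       COMPHDR(SYSTEM, NONE) +
       COMPMSG(RLE, NONE)

* After the channel has run, check what was negotiated and achieved:
DISPLAY CHSTATUS('QM1.TO.QM2') COMPHDR COMPMSG COMPRATE COMPTIME
```

The COMPRATE and COMPTIME fields in the channel status show the compression rate achieved and the time spent compressing, which is useful when comparing techniques against your own data.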
There is a technote entitled "Why does the WMQ channel status show compression values were negotiated but not used" that will help you understand the values in the display channel status output and it will help you interpret the results of your testing.
As I mentioned earlier, WMQ lets you get more sophisticated about data compression if you want to. You can use a channel message exit to decide whether compression is applied to an individual message. I will not go into much detail about this, but the sending channel's message exit has access to the MQCD and MQCXP structures. The MQCD compression attributes (HdrCompList and MsgCompList) contain the negotiated, mutually supported compression techniques, while the MQCXP compression attributes (CurHdrCompression and CurMsgCompression) contain the techniques to be used for the particular message. Your exit code can manipulate the MQCXP values to set the compression you want for an individual message.
As you might imagine, you can get much more sophisticated about data compression if you have the need and desire to do so, but you will likely find that simply setting the COMPHDR and COMPMSG channel attributes does all the data compression you will ever need.