At a high level there are not many things you have to worry about when setting up a queue manager, but those few things are important.
The most important is setting up your log data sets.
- Each active log data set can be up to 3 GB in size for V7.1.0 (4 GB for V8) if you are using archiving, because there can be problems reading archives larger than this. If the logs are too small you get a lot of checkpoint activity.
- Use striped data sets. This means the log is spread across multiple DASD volumes, which increases throughput. When using a striped data set, the size you specify is allocated on each volume. So if you allocate 250 Cyl with 4 stripes, the data set will be 1000 cylinders in total. In our testing we found 4 stripes per log was optimal. You can check this using this REXX program.
- Check the I/O response time of the DASD used by the logs, using RMF or another monitor. At Hursley we get most I/Os under 1 ms.
- At Hursley we get between 180 and 200 MB of data logged per second to each log. See here for how to check.
- DASD is relatively inexpensive. If you are using archives you should allocate enough logs (up to 31) so you can continue logging even if there is a problem with archiving, such as no space being available.
- Set OUTBUFF=4000 in CSQ6LOGP. This gives you the maximum number of log buffers. Running out of buffers is bad news, as applications doing logging have to wait until log I/O has finished and freed up some buffers.
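The striping arithmetic above can be sketched in a few lines of Python. This is an illustration only, assuming standard 3390 DASD geometry (56,664 bytes per track, 15 tracks per cylinder); those constants are assumptions, not figures from this article, and your storage may differ.

```python
# Sketch of striped log data set sizing, assuming 3390 DASD geometry.
# The geometry constants below are assumptions, not from the article.
BYTES_PER_TRACK = 56664   # 3390 track capacity in bytes
TRACKS_PER_CYL = 15       # tracks per cylinder on a 3390

def striped_size_cyls(cyls_per_stripe: int, stripes: int) -> int:
    """Striping allocates the requested size on *each* volume,
    so the total is the per-stripe allocation times the stripe count."""
    return cyls_per_stripe * stripes

def cyls_to_mb(cyls: int) -> float:
    """Convert cylinders to MB under the assumed 3390 geometry."""
    return cyls * TRACKS_PER_CYL * BYTES_PER_TRACK / (1024 * 1024)

total = striped_size_cyls(250, 4)  # the example above: 250 Cyl, 4 stripes
print(total)                       # 1000 cylinders in total
print(round(cyls_to_mb(total)))    # roughly 810 MB under 3390 geometry
```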
The second most important thing is buffer pools.
- You want to avoid buffer pools filling up.
- If applications can keep data in the buffer pool then the performance is good.
You need to monitor how full the buffer pool is.
Once the buffer pool is over 75% full, a background task starts writing pages from the buffer pool out to the page set. This has a small impact on the applications.
If the buffer pool gets over 95% full, then every update to a 4 KB page requires a write to the log with a wait, followed by a write to the page set and a wait.
Consider putting a 20 KB message, which uses 6 pages, into the buffer pool.
When the buffer pool is not full there is no I/O to the page set, just a commit. The commit may take 4 ms, so the total time is about 4 ms.
If the buffer pool is more than 95% full, there will be a synchronous log write taking perhaps 3 ms, and a page set write taking perhaps 3 ms, so 6 ms per page. This happens 6 times, so 6 * 6 = 36 ms for the message.
So when the buffer pool has filled up it is many times slower (36:4) compared to when the buffer pool is not full. With bigger messages this difference gets bigger.
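The arithmetic in this worked example can be written out as a small cost model. The 4 ms commit and 3 ms I/O times are the illustrative figures from the text, not measurements; the point is the ratio, not the absolute numbers.

```python
# Cost model for the 20 KB message (6 pages) example in the text.
COMMIT_MS = 4         # illustrative commit time from the text
LOG_WRITE_MS = 3      # synchronous log write per page
PAGESET_WRITE_MS = 3  # page set write per page
PAGES = 6             # a 20 KB message occupies 6 pages

def cost_pool_not_full() -> int:
    # Pages stay in the buffer pool: no page set I/O, just the commit.
    return COMMIT_MS

def cost_pool_over_95_percent() -> int:
    # Every page update forces a log write plus a page set write, with waits.
    return PAGES * (LOG_WRITE_MS + PAGESET_WRITE_MS)

print(cost_pool_not_full())         # 4 ms
print(cost_pool_over_95_percent())  # 36 ms, i.e. 9 times slower
```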
- If your buffer pools get over 95% utilised then message CSQP020E is generated.
- If you expect your buffer pools to fill up during normal use, the underlying page sets should be striped. When using a striped data set, the size you specify is allocated on each volume. So if you allocate 250 Cyl with 4 stripes, the data set will be 1000 cylinders in total.
- You can dynamically change the size of the buffer pools.
Make sure MQ and the CHINIT have appropriate priority.
You should give the queue manager the same WLM classification as DB2. The CHINIT is an important application, so it should be classified similarly to CICS.
This is an architecture/application consideration. You should try to avoid unnecessary work where possible.
- Trigger every should be used with care. At a message rate over about one message a second, it is more efficient in a CICS environment to have long-running tasks which loop processing the messages. This avoids generating a trigger message, the trigger monitor getting the message, and starting the transaction for every message. You may want logic in your application which periodically looks at the queue depth and, if this is over a threshold, starts a new instance of itself. If the application gets "no message found" it can end, unless it is the last one running. Of course you can use trigger first to start the transaction in the first place.
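The "scale with queue depth" logic described in that bullet can be sketched as two small decision functions. This is a hypothetical sketch: the function names and the threshold of 100 are made up, and in a real CICS application the depth would come from inquiring on the queue and "start a new instance" would be starting another transaction.

```python
# Hypothetical sketch of a long-running getter that scales itself on
# queue depth, as described in the text. Names and numbers are made up.
DEPTH_THRESHOLD = 100  # assumed backlog threshold for scaling up

def should_start_new_instance(queue_depth: int,
                              threshold: int = DEPTH_THRESHOLD) -> bool:
    """Called periodically from the get loop: if the backlog is deep,
    start another copy of this transaction."""
    return queue_depth > threshold

def should_end(got_no_message: bool, running_instances: int) -> bool:
    """End on 'no message found' unless this is the last instance,
    which stays around so each new message does not re-trigger a start."""
    return got_no_message and running_instances > 1

print(should_start_new_instance(250))  # True: deep backlog, scale up
print(should_end(True, 1))             # False: last instance keeps running
```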
- Having too many applications getting from a queue can be expensive. If a message arrives on a queue, in some cases all application instances waiting on the queue are posted, so they all rush to get the message, and only one will get it. So there will be many instances where no message was found, and each of these uses CPU.
- If you are putting multiple messages to the same queue, use MQOPEN, MQPUT, MQPUT... MQCLOSE instead of multiple MQPUT1 calls. For example, every time you open a queue, either as part of MQOPEN or MQPUT1, a security check is made. Using multiple MQPUT1s means you are doing unnecessary work.
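The saving can be illustrated with a simple call-count model: MQPUT1 is effectively an open, put, and close each time, so it repeats the open-time security check for every message, while the open-once pattern pays it once per queue handle. A minimal sketch of that bookkeeping:

```python
# Count of open-time security checks for the two put patterns in the text.
def security_checks_put1(n_messages: int) -> int:
    # Each MQPUT1 is open + put + close, so one security check per message.
    return n_messages

def security_checks_open_once(n_messages: int) -> int:
    # MQOPEN once, MQPUT n times, MQCLOSE once: one security check total.
    return 1

print(security_checks_put1(10))       # 10 security checks
print(security_checks_open_once(10))  # 1 security check
```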