[z/OS]

Address space storage

Use this topic for basic guidance on address space requirements for the IBM® MQ components.

Storage requirements can be divided into the following categories:
  • Common storage
  • Queue manager private region storage usage
  • Channel initiator storage usage

In a 64-bit address space, there is a virtual line called the bar that marks the 2GB address. The bar separates storage below the 2GB address, called below the bar, from storage above the 2GB address, called above the bar. Storage below the bar uses 31-bit addressability; storage above the bar uses 64-bit addressability.

You can specify the limit of 31-bit storage by using the REGION parameter in the JCL, and the limit of above the bar storage by using the MEMLIMIT parameter. These specified values can be overridden by MVS™ exits.

Attention: A change in system behavior has been introduced. Cross-system Extended Services (XES) now allocates 4GB of storage in high virtual storage for each connection to a serialized list structure.

Prior to this change, this storage was allocated in data spaces. After this change is applied, based on the way IBM MQ calculates storage usage, messages CSQY225E and CSQY224I might be issued, indicating that the queue manager is short of local storage above the bar.

You will also see an increase in the above the bar values reported in message CSQY220I.

For more information, see the IBM support document 2017139.

Suggested region sizes

The following table shows suggested values for region sizes.

Table 1. Suggested definitions for JCL region sizes

System              Definition setting
Queue manager       REGION=0M, MEMLIMIT=3G
Channel initiator   REGION=0M
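
For illustration, these values might appear as follows on the EXEC statements of the queue manager and channel initiator started task procedures. This is a minimal sketch: the step names MQM1MSTR and MQM1CHIN are hypothetical, and real procedures include additional parameters. CSQYASCP is the queue manager program referred to later in this topic; CSQXJST is the corresponding channel initiator program.

  //MQM1MSTR EXEC PGM=CSQYASCP,REGION=0M,MEMLIMIT=3G  QUEUE MANAGER
  //MQM1CHIN EXEC PGM=CSQXJST,REGION=0M               CHANNEL INITIATOR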

Common storage

Each IBM MQ for z/OS® subsystem has the following approximate storage requirements:
  • CSA 4KB
  • ECSA 800KB, plus the size of the trace table that is specified in the TRACTBL parameter of the CSQ6SYSP system parameter macro. For more information, see Using CSQ6SYSP.
In addition, each concurrent IBM MQ logical connection requires about 5KB of ECSA. When a task ends, other IBM MQ tasks can reuse this storage. IBM MQ does not release the storage until the queue manager is shut down, so you can calculate the maximum amount of ECSA required by multiplying the maximum number of concurrent logical connections by 5KB, as shown in the example after the following list. The number of concurrent logical connections is the sum of the number of:
  • Tasks (TCBs) in Batch, TSO, z/OS UNIX System Services, IMS, and Db2® stored procedure address space (SPAS) regions that are connected to IBM MQ, but not disconnected.
  • CICS® transactions that have issued an IBM MQ request, but have not terminated.
  • JMS Connections, Sessions, TopicSessions or QueueSessions that have been created (for bindings connection), but not yet destroyed or garbage collected.
  • Active IBM MQ channels.
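
For example, with an illustrative connection count: a queue manager that peaks at 2,000 concurrent logical connections requires a maximum of approximately 2,000 x 5KB = 10,000KB (about 10MB) of ECSA for logical connections, in addition to the base CSA and ECSA requirements listed above.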

You can set a limit on the common storage used by logical connections to the queue manager with the ACELIM configuration parameter. The ACELIM control is primarily of interest to sites where Db2 stored procedures cause operations on IBM MQ queues.

When driven from a stored procedure, each IBM MQ operation can result in a new logical connection to the queue manager. Large Db2 units of work, for example due to table load, can result in an excessive demand for common storage.

ACELIM is intended to limit common storage use and to protect the z/OS system, by limiting the number of connections in the system. It should only be set on queue managers that have been identified as using excessive quantities of ECSA storage. See the ACELIM section in Using CSQ6SYSP for more information.

To set a value for ACELIM, first determine the amount of storage currently in the subpool controlled by the ACELIM value. This information is in the SMF 115 subtype 5 records produced by statistics CLASS(3) trace.

IBM MQ SMF data can be formatted using SupportPac MP1B. The number of bytes in use in the subpool controlled by ACELIM is displayed in the STGPOOL DD, on the line titled ACE/PEB.

For more information about SMF 115 statistics records, see Interpreting IBM MQ performance statistics.

Increase the normal value by a sufficient margin to provide space for growth and workload spikes. Divide the new value by 1024 to yield a maximum storage size in KB for use in the ACELIM configuration.
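
As an illustration, with hypothetical figures: if the ACE/PEB line in the STGPOOL DD shows a peak of about 5,000,000 bytes in use, doubling that value to allow for growth and workload spikes gives 10,000,000 bytes, and dividing by 1024 gives approximately 9,766KB. Rounding up, you might code the following in the CSQ6SYSP system parameter macro (all other parameters omitted here):

  CSQ6SYSP ACELIM=10240,...

See Using CSQ6SYSP for the full macro syntax and the valid range of ACELIM values.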

The channel initiator typically requires ECSA usage of up to 160KB.

Queue manager private region storage usage

IBM MQ for z/OS can use storage above the 2GB bar for some internal control blocks. You can have buffer pools in this storage, which gives you the potential to configure much larger buffer pools if sufficient storage is available. Typically buffer pools are the major internal control blocks that use storage above the 2GB bar.

Each buffer pool size is determined at queue manager initialization time, and storage is allocated for the buffer pool when a page set that is using that buffer pool is connected. The LOCATION (ABOVE|BELOW) parameter specifies where the buffers are allocated. You can use the ALTER BUFFPOOL command to dynamically change the size of buffer pools.
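
For example, a sketch of an MQSC command that moves buffer pool 1 above the bar and sets it to 50,000 buffers (the pool number and buffer count are illustrative):

  ALTER BUFFPOOL(1) BUFFERS(50000) LOCATION(ABOVE)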

To use above the bar (64-bit) storage, you can specify a value for the MEMLIMIT parameter (for example, MEMLIMIT=3G) on the EXEC PGM=CSQYASCP statement in the queue manager JCL. Your installation might have a default value set.

You should specify a MEMLIMIT with a sensible storage size, rather than MEMLIMIT=NOLIMIT, to prevent potential problems. If you specify NOLIMIT or a very large value, an ALTER BUFFPOOL command with a large size can use up all of the available z/OS virtual storage, which leads to paging in your system. You might need to discuss the value of MEMLIMIT with the z/OS system programmer, in case there is a system-wide limit on the amount of storage that can be used.

Calculate the value of MEMLIMIT as 2GB plus the size of the buffer pools above the bar, rounded up to the nearest GB. Set MEMLIMIT to a minimum of 3GB, and increase it as necessary when you need to increase the size of your buffer pools.

For example, consider two buffer pools configured with LOCATION ABOVE: buffer pool 1 has 10,000 buffers, and buffer pool 2 has 50,000 buffers. The buffer storage above the bar is 60,000 (the total number of buffers) x 4096 = 245,760,000 bytes = 234.375MB.

All buffer pools, regardless of LOCATION, use 64-bit storage for control structures. As the number of buffer pools and the number of buffers in those pools increase, this can become significant: each buffer requires around an additional 200 bytes of 64-bit storage. For example, a configuration with 10 buffer pools, each with 20,000 buffers, requires 200 x 10 x 20,000 = 40,000,000 bytes, equivalent to 40MB.

In the first example, you can specify MEMLIMIT=3G, which allows scope for growth (234.375MB of buffer storage, plus the 64-bit control structure storage, plus 2GB, rounds up to 3GB).

For some configurations there can be significant performance benefits to using buffer pools that have their buffers permanently backed by real storage. You can achieve this by specifying the FIXED4KB value for the PAGECLAS attribute of the buffer pool. However, you should only do this if there is sufficient real storage available on the LPAR; otherwise, other address spaces might be affected. For information about when you should use the FIXED4KB value for PAGECLAS, see IBM MQ SupportPac MP16: IBM MQ for z/OS - Capacity planning & tuning.
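
For example, a sketch of a buffer pool definition, such as might be placed in the CSQINP1 initialization input data set, that requests page-fixed buffers above the bar (the pool number and buffer count are illustrative):

  DEFINE BUFFPOOL(2) BUFFERS(50000) LOCATION(ABOVE) PAGECLAS(FIXED4KB)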

Before you use storage above the bar, discuss your plans with your z/OS systems programmer to ensure that there is sufficient auxiliary storage for peak time usage, and sufficient real storage to prevent paging.

Note: The size of memory dump data sets might have to be increased to handle the increased virtual storage.

Making the buffer pools so large that there is MVS paging might adversely affect performance. You might consider using a smaller buffer pool that does not page, with IBM MQ moving the message to and from the page set.

You can monitor the address space storage usage from the CSQY220I message, which indicates the amount of private region storage in use above and below the 2GB bar, and the remaining amount.

Channel initiator storage usage

There are two areas of channel initiator storage usage that you must consider:
  • Private region
  • Accounting and statistics
Private region storage usage

You should specify REGION=0M for the CHINIT to allow it to use the maximum below the bar storage. The storage available to the channel initiator limits the number of concurrent connections the CHINIT can have.

Every channel uses approximately 170KB of extended private region in the channel initiator address space. Storage is increased by message size if messages larger than 32KB are transmitted. This increased storage is freed when:
  • A sending or client channel requires less than half the current buffer size for 10 consecutive messages.
  • A heartbeat is sent or received.
The storage is freed for reuse within the Language Environment; however, it is not seen as free by the z/OS virtual storage manager. This means that the upper limit for the number of channels depends on message size and arrival patterns, and on the limitations of individual user systems on extended private region size. The upper limit on the number of channels is likely to be approximately 9000 on many systems, because the extended region size is unlikely to exceed 1.6GB. The use of message sizes larger than 32KB reduces the maximum number of channels in the system. For example, if messages that are 100MB long are transmitted, and an extended region size of 1.6GB is assumed, the maximum number of channels is 15.

The channel initiator trace is written to a data space. The size of the data space storage is controlled by the TRAXTBL parameter. See ALTER QMGR.
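
For example, a sketch of an MQSC command that sets the trace data space size to 64MB (the value is illustrative):

  ALTER QMGR TRAXTBL(64)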

Accounting and statistics storage usage

You should allow the channel initiator access to a minimum of 256MB of virtual storage above the bar. You can do this by specifying MEMLIMIT=256M.
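
For example, a sketch of a channel initiator EXEC statement (the step and program names follow the earlier hypothetical sketch):

  //MQM1CHIN EXEC PGM=CSQXJST,REGION=0M,MEMLIMIT=256M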

If you do not set the MEMLIMIT parameter in the channel initiator JCL, you can set the amount of virtual storage above the bar using the MEMLIMIT parameter in the SMFPRMxx member of SYS1.PARMLIB, or from the IEFUSI exit.

If you set MEMLIMIT to restrict the above the bar storage below the required level, the channel initiator issues message CSQX124E, and class 4 accounting and statistics trace is not available.

Managing the MEMLIMIT and REGION size

Other mechanisms, for example the MEMLIMIT parameter in the SMFPRMxx member of SYS1.PARMLIB or the IEFUSI exit might be used at your installation to provide a default amount of virtual storage above the bar for z/OS address spaces. See memory management above the bar for full details about limiting storage above the bar.
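
For example, a system-wide default might be set with a statement such as the following in the SMFPRMxx member (the value is illustrative):

  MEMLIMIT(2G)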

Shared Message Data Set (SMDS) buffers and MEMLIMIT

When running messaging workloads using shared message data sets, there are two levels of optimization that can be achieved by adjusting the DSBUFS and DSBLOCK attributes.

The amount of above bar queue manager storage used by the SMDS buffer is DSBUFS x DSBLOCK. This means that by default, 100 x 256KB (25MB) is used for each CFLEVEL(5) structure in the queue manager.
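
For example, with an illustrative structure name and value: increasing DSBUFS to 200 for one structure, while leaving DSBLOCK at its default of 256KB, raises that structure's usage to 200 x 256KB = 50MB:

  ALTER CFSTRUCT(APP1) DSBUFS(200)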

Although this value is not too high on its own, an installation with many CFSTRUCTs, a high proportion of MEMLIMIT allocated to buffer pools, and deep indexed queues might, in total, run out of storage above the bar.