There are some tuning options that you can set to improve the throughput of MQ channels on z/OS. These may involve both MQ and TCP changes. See the instructions for distributed servers as well, because you need to change both ends of the channel.
(This entry was updated in February 2017 to add comments about Outbound Right Sizing (ORS), introduced in z/OS 2.2.)
An MQ sender channel gets messages from the transmission (XMIT) queue and sends the data in chunks of up to 32KB. At the end of a batch of data, the remote queue manager sends back a small end-of-batch response.
TCP uses various techniques to control the flow of data between two endpoints. One of these is known as a send window (this is well documented). This send window can increase and decrease if the receiving application changes the rate at which it receives the data.
The size of the send window is determined automatically by TCP/IP, but it can be influenced by configuring the receive buffer size on the remote host.
The send window only affects how much TCP can send. Write blocking only comes into effect when there is no send window available AND the sender's TCP send buffer becomes filled.
By default TCP/IP uses the values specified in SYS1.TCPPARMS(PROFxx) under TCPCONFIG: TCPSENDBFRSIZE xxxx and TCPRCVBUFRSIZE yyyy. These may not be present in the configuration, in which case defaults apply.
- In z/OS 2.1 the default sizes are 64KB (65536 bytes); before this the default sizes were 16KB.
An application can configure a socket by using the setsockopt() function with the SO_RCVBUF parameter to specify the receive buffer size, and setsockopt() with the SO_SNDBUF parameter to specify the sender's buffer size.
The maximum receive buffer size is specified in SYS1.TCPPARMS(PROFxx), TCPCONFIG, TCPMAXRCVBUFRSIZE.
- The default maximum size is 256KB. In z/OS 2.1 the largest value you can specify is 2MB; before this it was 512KB.
For z/OS 2.1 the maximum send buffer size is specified in SYS1.TCPPARMS(PROFxx), TCPCONFIG, TCPMAXSENDBUFRSIZE.
- The default maximum size is 256KB; the largest value you can specify is 2MB.
The amount of data on the network is limited by the smaller of the send buffer size and the receive window size.
Space in the send buffer is not released until TCP receives confirmation of receipt of the packet by the other end of the conversation. A larger send window size means more packets can be in flight between sender and receiver, and less synchronization between the sending application and the network. If the size of the send window is too small, only a small number of packets of data can be sent before TCP/IP blocks the sending application. Even if there is plenty more bandwidth available, it cannot be used unless the send window can be scaled up, and that cannot be done if the send buffer is too small.
Network latency also plays a role here. On high latency networks the longer delays while waiting for acknowledgements can greatly limit the effective bandwidth between the two locations. When high latency is a factor, the send window (the receiver's TCP receive window) and the sender's TCP send buffer must be further increased in order to make full, efficient use of the network link.
With modern networks, a send window of 16KB or even 64KB is often too small.
If the initial receive buffer size is 64KB or greater, then TCP can use a technique called Dynamic Right Sizing (DRS) to change the size of this send window. TCP will gradually increase the size of the send window until the optimum size is found.
If the receiving application is slow to process the data in the buffers, DRS will be disabled. In z/OS 2.1 and before, once it has been disabled it remains disabled until the connection is restarted.
Customers have increased TCPRCVBUFRSIZE and TCPSENDBFRSIZE to 256KB and got a major increase in throughput. This is system wide, so test it before changing these values. Also note that these values may still be too small, and may need to be increased further. In z/OS CommServer V2R2 autonomics have been added to dynamically grow the send and receive buffers based on network flows between the two streaming hosts.
Network conditions vary significantly between customer environments, so you need to test, tune, and iterate to find the best values for your environment.
How can I tell what sizes my buffers are – and if DRS is being used?
The TSO NETSTAT CONFIG command reports the default receive buffer size, the default send buffer size, and the default maximum receive buffer size. For example
TCP Configuration Table:
DefaultRcvBufSize: 00065536 DefaultSndBufSize: 00065536
DefltMaxRcvBufSize: 00262144 (256KB)
The NETSTAT command for the receiving end of the connection, for example TSO NETSTAT ALL (IPPORT nnnn, where nnnn is the port number, reports information like
RcvWnd: 0000131072 (128KB)
For the sender, the SndWnd value was smaller when the channel first started, and was
SndWnd: 0000524288 (512KB)
after it had sent many messages, showing the send window had increased.
For the receiving end, the one-byte field TcpPrf gives information about DRS.
If this has X'80' set, the connection is eligible for DRS.
If this has X'40' set, DRS is being used.
If this has X'02' set, DRS was enabled but has now been disabled. When it has been disabled, the receive window size (RcvWnd) is reset to the initial value.
For example, TcpPrf: E0 shows that DRS is enabled.
SndWnd: 0000524288 (512KB)
MaxSndWnd: 0000524288 (512KB)
The send window (SndWnd) advertised from the remote host has increased to 524288 and has not decreased.
In z/OS 2.2 there is now Outbound Right Sizing (ORS), which automatically tunes the sender's TCP send buffer in a similar way to how DRS tunes the receive side. This appears to give better results than DRS alone.
You can tell if this is being used by looking at the TcpPrf2 field.
X'40' (.1.. ....) Outbound right sizing (ORS) is active for this connection and the stack has expanded the send buffer beyond its original size.
X'20' (..1. ....) Indicates that this connection is eligible for ORS optimization support.
X'10' (...1 ....) Indicates that ORS is active for this connection, so that the stack automatically tunes the send buffer size. The SendBufferSize field shows the current size of the send buffer for this connection.
How do I set these send and receive buffer sizes?
There are several ways of setting the buffer sizes:
1. Changing the values of TCPSENDBFRSIZE and TCPRCVBUFRSIZE in the TCPPARMS(PROFxx) member. This will change the values for all connections so you should take care if you change these values.
2. You may need to change the value of TCPMAXRCVBUFRSIZE (and TCPMAXSENDBUFRSIZE) in the TCPPARMS(PROFxx) member to allow applications to use big buffers (specify 2M), even if you do not want to change the values of TCPSENDBFRSIZE and TCPRCVBUFRSIZE. You cannot set TCPSENDBFRSIZE or TCPRCVBUFRSIZE to a value greater than the configured maximum TCPMAXSENDBUFRSIZE or TCPMAXRCVBUFRSIZE.
3. MQ has tuning options to set the buffer size values.
For MQ V8 you can use the commands
+cpf RECOVER QMGR(TUNE CHINTCPRBDYNSZ nnnnn)
+cpf RECOVER QMGR(TUNE CHINTCPSBDYNSZ nnnnn)
which set SO_RCVBUF and SO_SNDBUF for the channels to the size in bytes specified by nnnnn.
You can display the current values, for example +cpf RECOVER QMGR(TUNE CHINTCPRBDYNSZ)
For MQ V7.1.0, contact IBM service for instructions on how to change the buffer sizes.
You will need to change the configuration at each end of the channel. See the appropriate documentation; for example, see RcvBuffSize in the MQ distributed configuration file QM.INI.
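Putting options 1 and 2 above together, the TCPPARMS profile statements might look like the following sketch. The values are illustrative only; check them against your own PROFILE member and test before applying, since they affect every connection on the system.

```
TCPCONFIG TCPSENDBFRSIZE     262144   ; default send buffer, 256KB
          TCPRCVBUFRSIZE     262144   ; default receive buffer, 256KB
          TCPMAXSENDBUFRSIZE 2097152  ; allow applications up to 2MB
          TCPMAXRCVBUFRSIZE  2097152  ; allow applications up to 2MB
```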