So, you need to limit the local ports used for outbound channels. This can be for a variety of reasons that are not really important to this article. What is important is a particular gotcha that can come into play. If you are a TCP/IP guru, you are already aware of 2MSL (twice the maximum segment lifetime). If you are not a TCP/IP guru, the MSL is the maximum time any TCP segment can remain alive in the network; after a connection is closed, TCP holds the local address and port combination in the TIME_WAIT state for 2MSL before it can be reused. That wait is commonly around 2 minutes, but can be lower. The actual value is determined by each TCP/IP implementation.
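To see TIME_WAIT holding a port outside of MQ, here is a small Python sketch using plain sockets. It is an illustration only, not anything MQ-specific: the behavior shown (the active closer entering TIME_WAIT and an immediate rebind failing with EADDRINUSE) reflects typical Linux defaults, and the port numbers are whatever the OS hands out.

```python
import errno
import socket
import time

# Find a local port that is currently free, then release it.
# A socket that was only bound, never connected, leaves no TIME_WAIT behind.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
fixed_port = probe.getsockname()[1]
probe.close()

# A throwaway listener standing in for the remote end of the connection.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
server_addr = server.getsockname()

# Connect from a fixed local port, like a channel pinned with LOCLADDR.
client = socket.socket()
client.bind(("127.0.0.1", fixed_port))
client.connect(server_addr)
conn, _ = server.accept()

# Close from the client side first: the active closer's socket pair
# enters TIME_WAIT and holds the local port for the 2MSL period.
client.close()
conn.close()
time.sleep(0.2)  # let the FIN/ACK exchange finish on loopback

# Try to reuse the same local port right away.
retry = socket.socket()
reuse_error = None
try:
    retry.bind(("127.0.0.1", fixed_port))
except OSError as exc:
    reuse_error = exc  # typically EADDRINUSE while TIME_WAIT lasts
finally:
    retry.close()
    server.close()

print(reuse_error)
```

On a typical Linux box the rebind fails until the TIME_WAIT interval expires, which is exactly the delay a LOCLADDR-pinned channel can run into.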
Why in the world am I writing about that in a blog entry for WebSphere MQ (WMQ) readers, you ask? If you use LOCLADDR to limit the number of local TCP/IP ports available to channels, then it can come into play easily enough. Let's take a particular scenario... You add the following to your WMQ configuration:
DEFINE CHANNEL(TO.BETA) CHLTYPE(SDR) CONNAME(192.0.2.1) XMITQ(BETA) LOCLADDR('192.0.2.2(9999)')
In the definition above, the sender channel TO.BETA will bind to local IP address 192.0.2.2 and local port 9999, and will connect to remote IP address 192.0.2.1. That is all well and good. However, if there is a problem and the connection is broken, it can take up to 2 minutes for the connection to be re-established, because only a single local port is available to the TO.BETA channel and TCP/IP cannot establish a duplicate connection until the 2MSL (TIME_WAIT) period expires. Again, this is only an issue when something breaks the TCP/IP connection.
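One way to soften the single-port bottleneck is that LOCLADDR also accepts a port range, so the channel can fall back to another local port while the first one is sitting in TIME_WAIT. A sketch using the same addresses as above (the range 9990-9999 is an arbitrary choice for illustration):

DEFINE CHANNEL(TO.BETA) CHLTYPE(SDR) CONNAME(192.0.2.1) XMITQ(BETA) LOCLADDR('192.0.2.2(9990,9999)')

With ten local ports in the pool, a broken connection can be re-established immediately from a different port instead of waiting out the 2MSL period on the old one.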
This blog entry is not a recommendation against using LOCLADDR; it is simply meant to raise awareness of the potential for delays in reconnecting.
For further reading on the subject of LOCLADDR, please refer to the following Technotes: