IBM Support

Some comments on the usage of Load Balancers (such as F5 / BIG-IP) by MQ

Question & Answer


Question

Your team is planning to use Load Balancers, such as F5 / BIG-IP, between IBM MQ client applications and IBM MQ queue managers.
You would like to know if there are disadvantages or side-effects that need to be taken into consideration.

Answer

+++ Considerations regarding IP Balancers and MQ:
- MQ requirement: The initial TCP packet can be routed to any of the queue managers in the network, but once the connection is established, all further TCP packets for the same connection must always be routed to the same queue manager (the same server).
- Some load balancers might send ping requests to check the availability of the ports. If MQ receives data that does not follow the MQ communication protocol (FAP), MQ generates FDC files to indicate that the data received is invalid. It is possible to suppress these FDCs in some cases.
 
- If the load balancer sends no data with its ping requests, MQ assumes that the connection came from a load balancer and does not cut an FDC.
- Not all types of applications work well with an IP load balancer.
 
- Client applications that expect a response from the same queue manager to which the request was sent.
For example, suppose a client application sends a request to a queue manager and expects the response from that same queue manager. If the client disconnects before getting the response from the queue on that queue manager, and its subsequent connection request goes to a different queue manager, then the client will not get the response.
- Applications that make two connections, one for put and one for get: these could go to different queue managers.
- XA clients: these are not supported with a load balancer unless all possible back ends are within one z/OS queue sharing group (QSG). Queue managers on any other (non-z/OS) platform do not support XA clients when load balanced.
- It is recommended to perform careful testing from the application perspective to ensure that the setup works before making any decisions.
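As a hedged illustration of quieting the noise caused by load-balancer probes: on distributed platforms, the QMErrorLog stanza in qm.ini supports SuppressMessage, SuppressInterval, and ExcludeMessage attributes that control which messages are written to the queue manager error log. The message numbers below are examples only; confirm the AMQ codes actually appearing in your own error logs before applying anything like this.

```ini
; qm.ini fragment (sketch): reduce error-log noise from load-balancer probes.
; The message numbers are examples - replace them with the AMQ codes seen
; in your own queue manager error log.
QMErrorLog:
   SuppressMessage=9207,9213
   SuppressInterval=30
   ExcludeMessage=9999
```

SuppressMessage writes each listed message at most once per SuppressInterval seconds, while ExcludeMessage discards the listed messages entirely; note that this affects the error log, not FDC generation itself.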
+++ Considerations for MQ JMS client applications
 
- The MQ JMS code will always attempt to connect the JMS Connection and the JMS Session to the same queue manager. However, MQ has no control over what the network or the F5 BIG-IP load balancer does with that connection request. It is quite possible that an F5 load balancer could send them to different queue managers.
- The SHARECNV attribute on the server-connection channel has no bearing on what happens here. SHARECNV(1) always creates a separate TCP connection, but even with SHARECNV(10) the connection request could be the 10th conversation sharing one TCP connection while the session request is the first conversation on a subsequent TCP connection, which the F5 could route to a different place.

- You could exploit the concept of a queue manager group: give each queue manager a unique name, and use an "*" (asterisk) at the beginning of your connection queue manager string (or just an asterisk on its own) so that the JMS connection succeeds regardless of the actual queue manager name.
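As a hedged sketch of that approach (the factory name, host name, and channel below are placeholders, not from this document): with the JMSAdmin tool, a client connection factory pointing at the balancer's virtual IP can be defined with QMANAGER(*) so that the connection is accepted whichever queue manager answers.

```
DEFINE CF(lbConnectionFactory) TRANSPORT(CLIENT) HOSTNAME(f5-vip.example.com) PORT(1414) CHANNEL(APP.SVRCONN) QMANAGER(*)
```

An application looking up this factory then connects without asserting a specific queue manager name, which is what allows the F5 to hand the connection to any member of the group.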

- You need to disable XA transactions on your connection factories, and you will not be able to exploit XA within your transactions with MQ.
XA in-doubt transaction resolution relies on the ability to reconnect to the same resource manager (MQ queue manager) during the xa_recover phase, and workload balancing your connections via an F5 prevents this. In-doubt transactions could therefore be resolved incorrectly during recovery, resulting in message loss or duplication, or in the need to resolve transactions manually with no information on the correct decision to make.

- You should only use the F5 to load balance your outbound Connection Factory connections (we call this 'outbound' even if you're doing receive calls to gather responses in request/reply). 
Activation specifications for MDB message listeners should bypass the F5 and connect directly to the queue managers. If you have multiple queue managers, then you should configure multiple endpoints, one per queue manager. 
- The F5 could cause the child sessions of an MDB connection to connect to a different queue manager than the parent browsing connection.
- Also, it does not usually make sense to workload balance inbound connections, as you can end up with stranded messages if you have queue instances that no application instances are listening to.
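One hedged illustration of direct (non-balanced) activation specifications, assuming WebSphere Liberty with the wmqJmsClient feature (element and attribute names from Liberty's server.xml; hosts, names, and the destination reference are placeholders): each activation specification points at exactly one queue manager, bypassing the F5, and you would repeat the element once per queue manager.

```xml
<!-- server.xml sketch: one activation spec per queue manager, bypassing the F5.
     Duplicate this element with queueManager="QM2", hostName="mqhost2...", etc. -->
<jmsActivationSpec id="myApp/MyMDB">
    <properties.wmqJms transportType="CLIENT"
                       hostName="mqhost1.example.com"
                       port="1414"
                       channel="APP.SVRCONN"
                       queueManager="QM1"
                       destinationRef="requestQueue"/>
</jmsActivationSpec>
```

Because each listener targets a named queue manager directly, every queue instance has a consumer and messages cannot be stranded behind the balancer.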
 
- An important code fix regarding the use of load balancers is the following:
https://www.ibm.com/support/pages/apar/IT27240
APAR IT27240: "Extremely short-lived TCP connections incorrectly generate AMQ9213E errors, A communications error for TCP/IP occurred, in the queue manager error log"
This APAR was fixed in MQ 9.1.0.2 LTS.
While TCP half-open probes (https://en.wikipedia.org/wiki/TCP_half-open) are still more efficient, with this fix full TCP connections no longer generate FDCs, and their impact is negligible on modern systems, especially if no data is sent.
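A hedged sketch of such a probe in F5 tmsh syntax (the monitor and pool names, addresses, and timings are placeholders; verify against your BIG-IP documentation): a tcp-half-open monitor sends only a SYN and resets on the SYN-ACK, so it never completes the handshake or sends data to the MQ listener.

```
# tmsh sketch: half-open TCP health monitor for MQ listener ports
create ltm monitor tcp-half-open mq_halfopen_monitor { interval 10 timeout 31 }
create ltm pool mq_pool { monitor mq_halfopen_monitor members add { 192.0.2.11:1414 192.0.2.12:1414 } }
```

Since no data reaches the queue manager, the probe falls into the case described above where MQ assumes a load-balancer connection and does not cut an FDC.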
 
+++ end +++


Document Information

Modified date:
04 September 2024

UID

ibm16151857