I've got a problem around retry intervals for messages being pulled off a queue by the DataPower - namely, all the retries for a message seem to happen instantaneously, rather than at the specified retry interval.
I currently have retries configured as follows:
A Multi Protocol Gateway with an MQ FSH configured to use a PRIMARY QM object performs main message processing (transformation from B2B to A2A doc type, signing, etc) and attempts to forward the message to an endpoint.
If the endpoint cannot be reached, the message is placed on a RETRY queue specific to that operation.
A second MPGW with an MQ FSH configured to use a RETRY QM object reads the message from the RETRY queue and attempts to forward it to the endpoint.
The RETRY QM object has the following relevant settings:
In Units of Work and Backout:
Units of Work: 1
Automatic Backout: On
Backout Threshold: 10
Backout Queue Name: SYSTEM.DEAD.LETTER.QUEUE
In Retry Behaviour:
Automatic Retry: On
Retry Interval: 20 seconds
Retry Attempts: 5
Long Interval Retry: 1800 seconds
Reporting Interval: 1
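For context, the Units of Work and Backout settings above describe standard MQ backout semantics. Here is a rough Python sketch of that behaviour (illustrative pseudologic only, not DataPower or MQ client code; the function and field names are invented):

```python
# Illustrative sketch of MQ automatic backout semantics.
# Not DataPower or MQ client code; names are invented for clarity.
BACKOUT_THRESHOLD = 10
BACKOUT_QUEUE_NAME = "SYSTEM.DEAD.LETTER.QUEUE"

def process_under_unit_of_work(message, deliver, backout_queue):
    """Each failed delivery rolls back the unit of work, which bumps the
    message's BackoutCount; once the count reaches the threshold, the
    message is requeued to the backout queue."""
    while message["backout_count"] < BACKOUT_THRESHOLD:
        try:
            deliver(message)  # MQGET + forward inside one unit of work
            return "delivered"
        except ConnectionError:
            message["backout_count"] += 1  # rollback increments BackoutCount
    backout_queue.append(message)  # e.g. SYSTEM.DEAD.LETTER.QUEUE
    return "backed out"
```

Note that nothing in this sketch sleeps between attempts, which is consistent with the all-at-once behaviour described below.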
Since it seemed relevant: the Results object in the client-to-server rule in the processing policy for the RETRY MPGW has the following relevant settings, which suggest it is doing nothing to supplement or override the default behaviour:
Number of Retries: 0
Retry Interval: 1000
I am seeing 10 retry attempts appearing in the debug probe for the Retry MPGW more or less instantaneously. I flush the probe, send the message to the main message processor with a deliberately bogus endpoint, refresh the probe immediately, and can see ten attempts to send the message right away. The settings in Retry Behaviour therefore seem to be ignored entirely, both in terms of the number of retries and the retry interval.
What precisely am I doing wrong here? Any ideas?
Thanks in advance,
Pinned topic: Retry Interval before message backout with WebSphere MQ Front Side Handler
Updated on 2013-03-07T01:21:13Z by SystemAdmin
Re: Retry Interval before message backout with WebSphere MQ Front Side Handler (2013-03-06T14:18:19Z, accepted answer)

Hi,
What is the 'Retrieve Backout Settings' value on the MQ FSH?
If turned on, MQ server settings take precedence.
Regarding the Probe: sometimes events arrive in a flash, and the sequence of events is not always as expected; I have observed the same. (The IBM folks can correct me.)
I would suggest logging the time in a log file. You may want to use the MQ PutTime from the incoming MQMD header.
What version of firmware are you using?
Once you have checked the MQ FSH settings, you may want to try the following, for debugging purposes only. In the MQ QM object, set Max Connections to 1, Initial Connections to 1, and the Cache Timeout as low as you can. This makes sure there is only one connection from DataPower to the queue.
From the DataPower help:
Specify the time interval (in seconds) between attempts to retry the failed connections to a remote host.
Note: This setting does not affect attempts to PUT or GET messages over an established connection.
msiebler (2700005RPQ)
Re: Retry Interval before message backout with WebSphere MQ Front Side Handler (2013-03-06T14:53:29Z, accepted answer)

To expand upon vish's answer: if you dig into the documentation, you should see that the MQ QM retry settings are not a 'transactional' retry; they are just a retry for the TCP connection from the client to the server, should that connection be broken.
To do what you wish would require some explicit logic in the second MPGW; such as a stylesheet to do the retries.
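The explicit logic described here would live in a stylesheet on the DataPower itself; purely to illustrate the shape of it, here is a hedged Python sketch of a retry with a real wait between attempts (the function names are invented, and the attempt/interval values mirror the RETRY QM settings above):

```python
import time

def retry_with_interval(send, message, attempts=5, interval=20):
    """Forward a message with an explicit wait between attempts, instead of
    relying on the MQ QM Retry settings (which only re-establish a broken
    TCP connection, not failed PUT/GET operations)."""
    for attempt in range(1, attempts + 1):
        try:
            send(message)
            return attempt  # number of attempts it took to succeed
        except ConnectionError:
            if attempt < attempts:
                time.sleep(interval)  # the pause the QM settings do NOT provide
    raise RuntimeError("endpoint unreachable after %d attempts" % attempts)
```

The key design point is that the pause happens in application logic, inside the processing rule, rather than in the connection layer.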
Re: Retry Interval before message backout with WebSphere MQ Front Side Handler (2013-03-07T01:21:13Z)
"Retrieve Backout Settings" is off in this instance... I tried it both ways prior to posting.
As far as the speed at which the messages appear in the probe vs lag, my test consists of sending a test message from SoapUI to the WSP fronting the original queue (the error handler for the MPGW that reads the message /from/ this queue reposts the message to the retry queue if the backend URL for the proxied service can't be reached), and then immediately alt-tabbing to the probe window for the MPGW that does the retries, by which time all ten retry attempts are already in the (previously flushed) probe. My retry interval is 10s, so if things are working properly I shouldn't see them all at once unless the probe can predict the future :)
That latter point (and msiebler's expansion of same) clarifies what's going on somewhat... I'd misinterpreted the meaning of those settings, but that amply explains to me why it wasn't working, and convinces me to find a different solution.
A consultant has recommended the use of a message count monitor on the applicable Multi Protocol Gateway to achieve this behaviour, so that if the endpoint is down, the MPGW will reject further messages for a set interval to give it time to come back up.
I've attempted to implement this. On my MPGW, I've created a couple of different monitors (not at the same time) to that end, but haven't had much success in achieving a different result. I've just been figuring out how to build them from context, though, so any advice here will almost certainly be useful. Most fields were left at their default values; I attempted to trigger on errors, matching all documents, with the result being a ten-second moratorium on processing further messages.
Message Count Monitor
Message Type: (a new Message Type with all fields left as default, to match everything)
Thresholds / Filters:
Interval: 10000
Rate Limit: 10
Action:
Type: Reject
Log Priority: debug
Block Interval: 10000 ms
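For what it's worth, the monitor configuration above amounts to a simple trip-and-block rate limiter. A minimal Python sketch of that idea (illustrative only; not how DataPower actually implements its message count monitors, and the class name is invented):

```python
import time

class MessageCountMonitor:
    """Sketch of a message count monitor: once more than `limit` messages
    arrive within `interval` seconds, reject all traffic for
    `block_interval` seconds (the 'block interval' in the Action above)."""

    def __init__(self, limit=10, interval=10.0, block_interval=10.0):
        self.limit = limit
        self.interval = interval
        self.block_interval = block_interval
        self.timestamps = []
        self.blocked_until = 0.0

    def allow(self, now=None):
        """Return True if the message may be processed, False to reject it."""
        now = time.monotonic() if now is None else now
        if now < self.blocked_until:
            return False  # inside the block interval: reject outright
        # Keep only arrivals inside the sliding window, then record this one.
        self.timestamps = [t for t in self.timestamps if now - t < self.interval]
        self.timestamps.append(now)
        if len(self.timestamps) > self.limit:
            self.blocked_until = now + self.block_interval
            self.timestamps = []  # start counting afresh once the block lifts
            return False
        return True
```

If the endpoint is down, the burst of failing messages trips the block, and further traffic is rejected for the block interval, giving the endpoint time to come back up.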