Handling MQInput node errors

The MQInput node has its own error handling for message flows. Consider how you want your solution to handle errors when your message flow contains an MQInput node.

Warning: When you set up your WebSphere® MQ queue manager, configure a backout queue, a dead letter queue (DLQ), or both, to ensure that failures are handled correctly. If you do not configure a backout queue or a DLQ, you risk message loss, as described in Handling rollbacks and then attempting to process the message again. If your solution requires that you do not use either of these queues, consider how your message flow design will handle failures.
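
As an illustration only (the queue and queue manager object names here are hypothetical), the following MQSC commands, run in runmqsc against your queue manager, give an input queue a named backout queue and give the queue manager a dead letter queue:

    * Hypothetical input queue, with a named backout queue
    DEFINE QLOCAL('FLOW.IN') BOQNAME('FLOW.IN.BACKOUT')
    DEFINE QLOCAL('FLOW.IN.BACKOUT')
    * Hypothetical dead letter queue, registered on the queue manager
    DEFINE QLOCAL('QM1.DEAD.LETTER.QUEUE')
    ALTER QMGR DEADQ('QM1.DEAD.LETTER.QUEUE')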

Internal MQInput node error conditions

The MQInput node handles errors as described in Handling errors in message flows. In addition, the MQInput node detects an internal error in the following situations:
  • The queue manager that is specified on the MQInput node cannot be reached; for example, it might be stopped, or not yet created. The message flow that contains the MQInput node deploys, but an error is written to the syslog to report that messages cannot be read from the specified queue. When the queue manager becomes available, the MQInput node automatically starts reading messages from the queue. This error also occurs if the queue manager is available but the specified queue is unavailable.
  • If a connection to a queue manager is lost, and that connection is enlisted in a transaction, automatic reconnection is delayed until the inflight transaction is complete. Other queue managers that are required, but are not part of the transaction, reconnect automatically without delay. A completed transaction can include the following cases:
    • The unavailable MQ resource was not required and was not used, because exception handling was defined in the message flow, so the inflight transaction was successful and committed.
    • The unavailable MQ resource was required, and the message flow cannot succeed without it, so the inflight transaction was rolled back.
  • An exception occurs when the associated message parser is initialized. If the Parse timing property is set to Immediate or Complete, the parser parses the input message after initialization. This parsing can cause a parsing or validation error, which is treated as an internal error.
  • A warning is received on an MQGET call. This condition is handled in the same way as a parsing error.
  • If an error occurs in a transactional message flow, and a Failure terminal is not connected:
    1. The message is rolled back to the input queue.
    2. The MQInput node writes the error, and any MQPUT error, to the local error log, and starts its retry logic, as described in Handling rollbacks and then attempting to process the message again.
  • When a message is rolled back to the input queue, and the WebSphere MQ backout threshold is reached. This case is described in Handling rollbacks and then attempting to process the message again.

MQ connection errors

The MQInput node attempts to connect to the queue manager when the flow is deployed and started. The MQOutput, MQGet, and MQReply nodes attempt to connect when the first message is sent or received. If any connection problems occur, see the WebSphere MQ product documentation for information about any mqrc return code values that are reported in the IBM® Integration Bus BIP messages.
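
If the cause of a connection error is not obvious, one diagnostic sketch (the queue name here is illustrative) is to run MQSC display commands in runmqsc against the queue manager to confirm that the resources named on the node exist and are available:

    * Confirm that the queue manager is running and responding
    DISPLAY QMGR QMNAME
    * Confirm that the input queue exists, check its depth and whether gets are enabled
    DISPLAY QLOCAL('FLOW.IN') CURDEPTH GET
    * Check how many applications have the queue open for input and output
    DISPLAY QSTATUS('FLOW.IN') IPPROCS OPPROCS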

Handling rollbacks and then attempting to process the message again

When a message is rolled back to the input queue, the MQInput node attempts to process the message again. The MQInput node checks whether the backout threshold value has been reached. The BackoutCount for each message is maintained by WebSphere MQ in the MQMD.

When you create the queue, you can specify the value of the backout threshold attribute BOTHRESH, or use the default, 0. If you accept the default value of 0, the MQInput node increases this value to 1. The MQInput node also sets the value to 1 if it cannot detect the current value. If the backout threshold is set to 1, the message is processed once and is not retried through the Out terminal of the MQInput node. For the message to be retried at least once, set the backout threshold to 2.
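
For example, assuming a hypothetical input queue named FLOW.IN, the following MQSC command sets the backout threshold to 2, so that a rolled-back message is retried once through the Out terminal before the backout handling described in the rest of this section applies:

    * Allow one retry before the message is treated as unprocessable
    ALTER QLOCAL('FLOW.IN') BOTHRESH(2)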

The retry behavior for the MQInput node is as follows:

  1. The MQInput node reads the message, and checks the BackoutCount against the backout threshold.
  2. If the backout threshold is not reached, the MQInput node attempts to get the message from the queue again. If this attempt fails, it is handled as an internal error. If the attempt succeeds, the MQInput node propagates the message through the Out terminal so that the message flow can process it again.

    Typically, the backed-out message is read again by the same MQInput node. However, depending on your WebSphere MQ queue manager configuration, it is possible for an MQInput node in a different message flow, or another WebSphere MQ application, to read the same input queue and get the backed-out message.

    If you want to control or prevent rollback, you can use the Catch terminal, as described in Handling errors in message flows.

  3. If the backout threshold is reached:
    • If a Failure terminal is connected, the MQInput node propagates the message to the fail flow. You must handle the error on the fail flow that is connected to the Failure terminal.

      When the backout threshold is reached, the MQInput node re-reads the message from the input queue before it propagates the message through the Failure terminal. Therefore, the exception list does not contain the exceptions that occurred in the flows that are connected to the Out or Catch terminals; it contains only new exceptions, which describe the reason that the message went through the Failure terminal (for example, that the backout threshold was reached).

    • If a Failure terminal is not connected, the MQInput node attempts to put the message on an available queue, in order of preference:
      1. If a backout queue name is defined for the input queue (queue attribute BOQNAME), the message is put on the backout queue.
      2. If the backout queue is not defined, or it cannot be identified by the MQInput node, the message is put on the dead letter queue (DLQ), if one is defined. The MQDLH PutApplName property is set to WebSphereMQIntegrator with the broker major version number appended, for example WebSphereMQIntegrator9.
      3. If the message cannot be put on either of these queues because of an MQPUT error (for example, the queue does not exist, or cannot be identified by the MQInput node), the message cannot be handled safely without risk of loss.

        The message cannot be discarded, so the message flow continues to attempt to back out the message. It records the error situation by writing errors to the local error log. A second indication of this error is the continual increase of the BackoutCount value of the message on the input queue.

        If this situation occurs because neither queue exists, you can define one of the queues to resolve the problem; example commands are shown after these steps. If the condition that prevented the message from being processed is resolved, you can temporarily increase the value of the BOTHRESH attribute, which forces the message through normal processing.

  4. If the backout threshold is reached a second time (that is, the message also fails in the flow that is connected to the Failure terminal and is rolled back again), the MQInput node attempts to put the message on an available queue, in the order of preference that is defined in step 3.

    When a backed-out message is read, the MQInput node can put the message to the backout queue or the DLQ. The BackoutCount value is checked when the message is received from the input queue; if the backout threshold is exceeded, the message is backed out immediately and no further processing is performed on the message. The backout operation occurs in a separate transaction from the previous processing failures, and the message is not parsed or validated during the backout transaction. The backout transaction generates its own set of monitoring events; therefore, information that is obtained through message parsing, such as the ExceptionList, might not be available.
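
As a recovery sketch for the blocked-message situation in step 3 (the queue names are again hypothetical, and the threshold value is arbitrary), you can either give the input queue a backout queue, or temporarily raise its backout threshold above the current BackoutCount so that the message is driven through normal processing again:

    * Option 1: create and name a backout queue for the input queue
    DEFINE QLOCAL('FLOW.IN.BACKOUT')
    ALTER QLOCAL('FLOW.IN') BOQNAME('FLOW.IN.BACKOUT')
    * Option 2: temporarily raise the backout threshold above the current BackoutCount
    ALTER QLOCAL('FLOW.IN') BOTHRESH(999)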

Handling message group errors

WebSphere MQ supports message groups. You can specify that a message belongs to a group and its processing is then completed with the other messages in the group (that is, either all messages are committed or all messages are rolled back). When you send grouped messages to an integration node, this condition is upheld if the message flow is configured correctly, and errors do not occur during group message processing.

To configure the message flow to handle grouped messages correctly, review Receiving messages in a WebSphere MQ message group. However, if an error occurs while one of the messages is being processed, the message group might not be processed correctly.

If the MQInput node is configured as described, all messages in the group are processed in a single unit of work, which is committed when the last message in the group is successfully processed. However, if an error occurs before the last message in the group is processed, the unit of work that includes the messages up to and including the message that generated the error is subject to the error handling defined by the rules that are documented here, which might result in that unit of work being backed out.

Any remaining messages in the group might be successfully read and processed by the message flow, and are therefore handled in a new unit of work, which is committed when the last message is processed. Therefore, if an error occurs in a group, but not on the first or last message, it is possible for part of the group to be backed out while another part is committed.

Depending on the type of message processing that you require, you might need to provide extra error handling for message groups. For example, you might record the failure of the message group in a database, and include a check on the database when you retrieve each message. This check would force a rollback if the current group already encountered an error. This error handling configuration would ensure that the whole group of messages is backed out and not processed unless all are successful.