Back-end processing of APPC or OTMA transaction messages in an IMSplex with shared queues

In an IMSplex with shared queues, you can use remote MSC transactions to route APPC or OTMA messages to back-end IMS systems for processing.

Using remote MSC transactions avoids the local affinity restrictions that are otherwise imposed on transaction messages received from APPC or OTMA clients. Any IMS system in the IMSplex that defines the transaction as a local MSC transaction and assigns it to a region can then process the transaction.

If you are migrating from an MSC network to an IMSplex, you can use existing remote MSC transactions to process APPC or OTMA messages on back-end IMS systems in the IMSplex and bypass the APPC and OTMA affinity restrictions. When the transactions are converted to local IMSplex transactions, the APPC and OTMA affinity restrictions still apply unless you specify RRS=Y and AOS=Y.
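As a minimal sketch of where these options are specified (the member suffixes shown are the conventional defaults and are an assumption, not taken from this topic), RRS= is an IMS execution parameter in the DFSPBxxx member or on the control region JCL, and AOS= is a data communication option in the DFSDCxxx member:

```
DFSPBxxx:  RRS=Y
DFSDCxxx:  AOS=Y
```

AOS= also accepts other values that control when back-end IMS systems process APPC and OTMA messages from the shared queues; check the DFSDCxxx documentation for your IMS release before choosing a value.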

When an APPC or OTMA message is queued in a front-end IMS system to a transaction that is defined as a remote MSC transaction, IMS inserts the transaction message to the shared queue without affinity to any IMS system.

When a back-end IMS system retrieves APPC and OTMA transaction messages from the shared queue, it saves an APPC or OTMA conversation token in the message prefix and processes the transaction message independently from, and asynchronous to, the APPC or OTMA conversation. IMS does not propagate or cascade the APPC or OTMA conversation to the back-end IMS system or to any other IMS systems, including the front-end IMS system, that might process APPC or OTMA messages after a program-to-program switch.

Note: If an OTMA transaction that is processing independently from the OTMA conversation issues a CHNG call to a modifiable PCB and IMS calls the OTMA Destination Resolution user exit (OTMAYPRX), the OTMAYPRX user exit recognizes the transaction message as an OTMA message only if the transaction is processing within the IMSplex where the OTMA client originally submitted the transaction.

When the originating front-end IMS system receives the first or only response, the response is returned to the client in APPC or OTMA synchronous mode, assuming that the client is still connected in synchronous mode. IMS returns any subsequent responses to the client for the same interaction asynchronously. If the client terminates before the first or only response is returned, the response is not discarded; instead, IMS queues it to the client asynchronously.

For example, suppose you define an MSC transaction as remote in a front-end IMS system without assigning the transaction to a started MSC link, and you define the same transaction as local in the back-end IMS system and assign it to a started region. When an APPC or OTMA client initiates the transaction on the front-end IMS system in synchronous mode, the front-end IMS system saves a token in the message prefix to identify the client and places the transaction on the shared queue without any affinity to the front-end IMS system.
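The definitions in this example can be sketched with stage-1 system definition macros. All PSB names and SYSID values here are illustrative assumptions:

```
* Front-end IMS 1: TRAN1 is remote (SYSID=(remote,local)) and is
* not assigned to a started MSC link, so IMS 1 queues it to the
* shared queue without affinity.
APPLCTN  PSB=PGM1
TRANSACT CODE=TRAN1,SYSID=(2,1)

* Back-end IMS 2: TRAN1 is local and assigned to a started region.
APPLCTN  PSB=PGM1
TRANSACT CODE=TRAN1
```

If you use dynamic resource definition instead of stage-1 macros, a comparable front-end definition could be created with a type-2 command such as CREATE TRAN NAME(TRAN1) SET(SIDR(2),SIDL(1)).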

When the back-end IMS system retrieves the transaction from the shared queue, the transaction is processed asynchronously and disassociated from the APPC/OTMA client.

The following series of figures shows various scenarios in which remote MSC transactions can be used to process APPC or OTMA transactions on the back end in an IMSplex, as well as outside of an IMSplex by using an MSC link.

The following figure shows a simple configuration in which a remote MSC transaction is used to process an APPC or OTMA transaction on the back end in an IMSplex. The transaction TRAN1 is defined to IMS 1 as a remote MSC transaction, but because TRAN1 is not assigned to an MSC link in IMS 1, IMS 1 queues it to the shared queue.

After submitting the transaction to IMS 1, the APPC or OTMA client waits in a receive state for the synchronous response; however, the queuing and processing of TRAN1 by IMS 1 and IMS 2 is handled by IMS independently from the synchronous communications mode that is maintained with the client.

Figure 1. Using a remote MSC transaction to route an APPC or OTMA transaction to a back-end IMS system
The flow of an APPC or OTMA transaction is shown from the client, to IMS 1, to the shared queue as a remote MSC transaction, and to IMS 2. The response is queued to the shared queue by IMS 2 and returned to the client by IMS 1.

The following figure shows another scenario in which a remote MSC transaction is used to route an APPC or OTMA transaction. This scenario involves three IMS systems, and TRAN1 is routed to a remote MSC IMS system by way of both a shared queue and an MSC link.

When IMS 1 receives the transaction from the client, IMS 1 queues it to the shared queue. In IMS 2, the transaction TRAN1 is defined as a remote MSC transaction and assigned to the MSC link to IMS 3. IMS 3 generates the response, which is returned across the MSC link to IMS 2 and then to IMS 1 by way of the shared queue. IMS 1 returns the response to the APPC or OTMA client in the synchronous mode expected by the client.
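A hedged sketch of the IMS 2 definitions in this scenario follows; the link name, partner ID, PSB name, and SYSID values are assumptions, and the MSPLINK physical link definition is omitted:

```
* IMS 2: logical link and logical link path to IMS 3.
MSLINK   PARTNER=23,MSPLINK=LNK23
MSNAME   SYSID=(3,2)
* TRAN1 is remote on IMS 2; its SYSID matches the MSNAME path,
* so the message is sent across the MSC link to IMS 3.
APPLCTN  PSB=PGM1
TRANSACT CODE=TRAN1,SYSID=(3,2)
```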

Figure 2. Routing an APPC or OTMA transaction across a shared queue and an MSC link
The figure is described in the text preceding the graphic.

In the scenario shown in the following figure, the APPC or OTMA transaction TRAN1 is routed to IMS 2, a back-end IMS system in a shared queues group, where a program-to-program switch to TRAN2 sends processing back to IMS 1 by way of the shared queue. TRAN2 is not an MSC transaction. TRAN2 then sends processing back to IMS 2 by issuing another program-to-program switch to TRAN3, also a non-MSC transaction. TRAN3 is queued to the shared queue and retrieved by IMS 2, where TRAN3 processes and generates a response for the client. The response is returned to IMS 1 by way of the shared queue. IMS 1 returns it to the client in the synchronous mode expected by the client.

Figure 3. Routing an APPC or OTMA transaction by using a remote MSC transaction and shared queues with multiple program-to-program switches
The figure is described in the preceding text.

The scenario shown in the following figure involves three IMS systems. IMS 1 routes TRAN1 to IMS 2 as a remote MSC transaction by way of the shared queue. TRAN1 processes locally on IMS 2 and issues a program-to-program switch to TRAN2, which is defined as a remote MSC transaction on IMS 2. IMS 2 sends TRAN2 to IMS 3 across an MSC link. TRAN2 is processed by IMS 3 and generates the response for the client. IMS 3 returns the response to IMS 2 across the MSC link. IMS 2 queues the response to the shared queue. IMS 1 retrieves the response and returns it to the APPC or OTMA client in the synchronous mode the client expects.

Figure 4. APPC or OTMA transaction routed using MSC and shared queues with a program-to-program switch
The figure is described in the preceding text.