CICS MRO, ISC, and IPIC: performance and tuning

Multiregion operation (MRO), intersystem communication over SNA (ISC over SNA), and IP interconnectivity (IPIC) connections enable CICS® systems to communicate and share resources with each other. Performance is influenced by the intercommunication facilities that you use with the connection and by your management of the connection.

These CICS intercommunication facilities are available using MRO, ISC over SNA, and IPIC connections:
  • Function shipping
  • Distributed transaction processing
  • Asynchronous processing
  • Transaction routing
  • Distributed program link
For descriptions of the CICS intercommunication methods and facilities, see Getting started with intercommunication.

CICS ISC/IRC statistics show the frequency of use of intercommunication sessions and mirror transactions. The z/OS® Communications Server SNA trace, an SVC trace, and RMF give additional information.

If each transaction makes a number of intercommunication requests, function shipping generally incurs the most processor usage. The number of requests per transaction that constitutes the break-even point depends on the nature of the requests.

In many cases, distributed transaction processing (DTP) and asynchronous processing are the most efficient facilities for intercommunication, because a variety of requests can be batched in one exchange. DTP, however, requires an application program that is specifically designed to use this facility. For information about designing and developing DTP, see Concepts and design considerations.

Transaction routing, in most cases, involves one input and one output between systems, and the additional processor usage is minimal.

MRO

Multiregion operation (MRO), in general, causes less processor usage than intersystem communication (ISC) because the SVC pathlength is shorter than that through the multisystem networking facilities of SNA. CICS MRO provides a long-running mirror transaction and fastpath transformer program to further reduce processor usage.

Ensure that you have a sufficient number of MRO sessions defined between the CICS systems to take your expected traffic load. The increased cost in real and virtual storage is minimal, and task life is reduced, so the probable overall effect is to save storage. Examine the ISC/IRC statistics (see ISC/IRC system and mode entry statistics) to ensure that no allocates have been queued; also ensure that all sessions are being used. However, the definition of too many MRO sessions can unduly increase the processor time used to test their associated ECBs.
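
The number of sessions for an MRO connection is set on its SESSIONS definition. The following sketch is illustrative only; the resource names and the counts are assumptions, and the correct values depend on your expected traffic:

  DEFINE SESSIONS(MROSESS1) GROUP(MROCONN)
         CONNECTION(CICB) PROTOCOL(LU61)
         SENDCOUNT(10) RECEIVECOUNT(10)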

If you want only transaction routing with MRO, the processor usage is relatively small. The figure is release- and system-dependent (for example, it depends on whether you are using cross-memory hardware), but you can assume a total cost somewhere in the range of 15 - 30 KB instructions per message pair. This is a small proportion of most transactions, commonly 10% or less. The cost of MRO function shipping can be very much greater, because typically each transaction has many more inter-CICS flows. The cost depends greatly on the disposition of resources across the separate CICS systems.

MRO can affect response time as well as processor time. Delays occur in getting requests from one CICS system to the next, because CICS terminal control in each system has to detect any request sent from the other and then process it. In addition, on a uniprocessor, MVS™ has to arrange the dispatching of two CICS systems, which implies extra WAIT/DISPATCH processor usage and delays.

Specify the system initialization parameter MROLRM=YES if you want to establish a long-running mirror task. This saves re-establishing communications with the mirror transaction if the application makes many function shipping requests in a unit of work.
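
As a minimal sketch, the parameter can be supplied as a SIT override in the SYSIN data set, typically in the region that runs the mirror transactions (the resource-owning region):

  * SIT override (illustrative)
  MROLRM=YES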

When you use MRO, you can eliminate some processor usage for SVC processing with the use of MVS cross-memory services. Cross-memory services use the MVS common system area (CSA) storage for control blocks, not for data transfer, which can also be a benefit. Note, however, that MVS requires that an address space using cross-memory services be nonswappable.
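
Cross-memory services are selected through the access method of the MRO CONNECTION definition. A minimal, illustrative sketch (the connection name and NETNAME are assumptions):

  DEFINE CONNECTION(CICB) GROUP(MROCONN)
         ACCESSMETHOD(XM) NETNAME(CICSB)
         INSERVICE(YES)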

ISC

For situations where ISC is used across MVS images, consider using XCF/MRO. CICS uses the MVS cross-system coupling facility (XCF) to support MRO links between MVS images for transaction routing, function shipping, and distributed program link. You can also use XCF/MRO for distributed transaction processing, if the LU6.1 protocol is adequate for your purpose. XCF/MRO uses less processor resource than ISC.

You can prioritize ISC mirror transactions. The CSMI transaction is for data set requests, CSM1 is for communication with IMS systems, CSM2 is for interval control, CSM3 is for transient data and temporary storage, and CSM5 is for IMS DB requests. If one of these functions is particularly important, you can prioritize it over the rest. This prioritization is not effective with MRO because any attached mirror transaction services any MRO request while it is attached.
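
For example, the relative priority of a mirror transaction is set with the PRIORITY attribute of its TRANSACTION definition. As a sketch, assuming the supplied definitions are copied from group DFHISC into a group of your own before they are changed, and treating the priority value as illustrative:

  CEDA COPY TRANSACTION(CSMI) GROUP(DFHISC) TO(MYISC)
  CEDA ALTER TRANSACTION(CSMI) GROUP(MYISC) PRIORITY(200)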

If ISC facilities tend to flood a system, you can control them with the SNA VPACING facility. Specifying multiple sessions (SNA parallel sessions) increases throughput by allowing multiple paths between the systems. With CICS, you can specify an SNA class of service (COS) table with LU6.2 sessions, which can prioritize ISC traffic in a network.

Interregion communication performance costs with MRO and ISC

Using the following tables, you can compare the relative processing times of particular CICS API calls, and examine some of the other factors that affect overall processing times. These tables can help you make application design decisions when you are considering performance. To calculate a time for a transaction, find the entries appropriate to your installation and application, and add their values together.

Before you work with these numbers, be aware of the following considerations:
  • The cost per call is documented in 1 K or millisecond instruction counts taken from a tracing tool used internally by IBM®. Each execution of an instruction has a count of 1. No weighting factor is added for instructions that use more machine cycles than others.
  • Because the measurement consists of tracing a single transaction within the CICS region, any wait, for example a wait for I/O, results in a full MVS WAIT. This cost has been included in the numbers reported in this document. On a busy system the possibility of taking a full MVS WAIT is reduced because the dispatcher has a higher chance of finding more work to do.
  • When judging performance, the numbers in this information should not be compared with those published previously, because a different methodology has been used.

Transaction routing performance costs

Connection type            Cost
MRO XM                     37.0
MRO XCF (through CTC)      43.0
MRO XCF (through CF)       66.0
ISC LU6.2                 110.0

Function shipping performance costs (MROLRM=YES)

Type                             MRO XM   MRO XCF (through CTC)   MRO XCF (through CF)
Initiate/terminate environment     13.2                    13.2                   13.2
Each function shipping request      9.0                    23.4                   48.4
Sync point flow                     9.0                    23.4                   48.4
Notes:
  • These costs relate to CICS systems with long-running mirrors.
  • ISC LU6.2 does not support MROLRM=YES.
  • The cost of session allocation, initiation of the mirror transaction, stopping the mirror transaction, and session deallocation is included in the initiate/terminate environment.

For example, if you migrate from local file access to MRO XM and issue 6 function shipping requests per transaction, the additional cost is calculated as follows:

13.2 (initiate/terminate) + 6 (requests) x 9.0 (request cost) + 9.0 (sync point) = 76.2
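
For comparison, the same six requests over MRO XCF through CTC, using the figures in the preceding table, cost:

13.2 (initiate/terminate) + 6 (requests) x 23.4 (request cost) + 23.4 (sync point) = 177.0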

Function shipping performance costs (MROLRM=NO)

Without long-running mirrors, each function ship read request incurs the cost of session allocation and mirror initialization and termination. However, the first change to a protected resource (for example, a READ UPDATE or a WRITE) causes the session and mirror to be held until a sync point.
Connection type            Cost
MRO XM                     21.4
MRO XCF (through CTC)      35.0
MRO XCF (through CF)       59.9
ISC LU6.2                 115.0
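
As a rough comparison with the earlier MROLRM=YES example, if the same six requests were all reads and ran without long-running mirrors over MRO XM, and assuming each figure above represents the full cost of one function shipping request, the cost would be about 6 x 21.4 = 128.4, compared with 76.2 when long-running mirrors are used.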

IPIC

The CICS-supplied mirror program DFHMIRS is defined as a threadsafe program. For supported CICS facilities, over IPIC connections only, the remote CICS region uses a threadsafe mirror transaction and runs the request on an L8 open TCB whenever possible. For threadsafe applications that issue commands for functions on remote CICS systems using IPIC connections, the reduction in TCB switching improves application performance compared to other intercommunication methods. The use of open TCBs also provides significant potential throughput improvements between CICS regions.

For some applications, the performance benefits of using long-running mirrors can also be significant. IPIC supports the MIRRORLIFE attribute of the IPCONN, which can improve efficiency and provide performance benefits by specifying the lifetime of mirror tasks and the amount of time a session is held.
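
A minimal, illustrative IPCONN sketch that keeps the mirror task for the duration of a unit of work; the resource names, host, and port are assumptions, and other required attributes are omitted:

  DEFINE IPCONN(CICB) GROUP(IPICCONN)
         HOST(cicsb.example.com) PORT(12345)
         TCPIPSERVICE(IPICSVC)
         SENDCOUNT(100) RECEIVECOUNT(100)
         MIRRORLIFE(UOW)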

IPIC supports threadsafe processing for the LINK command between CICS TS 4.2 or later regions. If a threadsafe program makes DPL requests that are transmitted to another region over an IPIC connection, you might gain improved performance by coding your dynamic routing program to threadsafe standards, as shown in the sketch after the following list. IPIC supports the following DPL calls:
  • Distributed program link (DPL) calls between CICS TS 3.2 or later regions.
  • Distributed program link (DPL) calls between CICS TS and TXSeries® Version 7.1 or later.
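
As a sketch, a dynamic routing program that is coded to threadsafe standards can be defined with the CONCURRENCY(THREADSAFE) attribute; the program and group names here are assumptions:

  DEFINE PROGRAM(MYDSRTP) GROUP(ROUTING)
         CONCURRENCY(THREADSAFE)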

Function shipping file control, transient data, and temporary storage requests over an IPIC connection provides CICS application programs with the ability to run without regard to the location of the requested resources. Function shipping of file control and temporary storage requests using IPIC connections is threadsafe between CICS TS 4.2 or later regions. Function shipping of transient data requests using IPIC connections is threadsafe between CICS TS 5.1 or later regions. Any global user exit programs that are called in the remote CICS region for file control, transient data, and temporary storage requests must be enabled as threadsafe programs for the best performance.

For file control requests that are function shipped using IPIC connectivity, to gain the performance benefits of the open transaction environment, you must specify the system initialization parameter FCQRONLY=NO in the file-owning region.
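
For example, as a SIT override in the file-owning region:

  * SIT override (illustrative)
  FCQRONLY=NO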