IBM Support

Performance problem with CICS Web Services

Troubleshooting


Problem

You are analyzing performance data from CICS Web transactions and notice that the response time is longer than you expect. This could be while using CICS Web Support or CICS Web Services, each with or without Secure Sockets Layer (SSL).

Symptom

Slower than expected performance with CICS Web Services

Cause

The delay is caused by the TCP/IP DELAYACKS setting, which holds back acknowledgments (for up to roughly 200 ms) while waiting for outbound data that the ACK can be combined with.

Diagnosing The Problem

While analyzing performance data for Web-based transactions, you notice that when you add up the response times for all of the transactions involved in the process, there is roughly a 0.5 second gap between that sum and the response time for the client transaction. In the example below, the combined response time for the four transactions CWXN, CWXN, CPIH, and FXS1 is 0.337 seconds, yet the response time for FXC1 is 1.0082 seconds, a difference of 0.6712 seconds. FXC1 needs some processing time of its own, but you expect that time to be relatively small, so roughly 0.5 to 0.6 seconds remain unexplained. You expect the response time for FXC1 to be close to the sum of all the other transactions that must complete before FXC1 can end. This is not always a valid assumption, but in this case it did uncover the DELAYACKS problem.

This discussion focuses on a CICS Web Services transaction flow without SSL, but similar results can be seen for a CICS Web Support flow, and adding SSL to either scenario shows similar results as well. In this example, the FXC1 transaction is entered from a terminal to start the flow. FXC1 acts as the client and goes outbound to the same CICS region (on the same LPAR), where the FXS1 (server) transaction runs and returns the result to the FXC1 client. So in this case, the same CICS region is acting as both the client and the server. You believe that the combined time for tasks 233 (CWXN), 234 (CWXN), 235 (CPIH), and 236 (FXS1) should be closer to the total response time for FXC1, so the extra time must be in TCP/IP or somewhere in the network.

A review of the CICS trace shows three SOCKET RECEIVEs that take more time than expected. The performance information (gathered from a monitoring product) looks like the following for a CICS Web Services flow without SSL:


DATE  TIME(end) TRANID TASK# TERMID RESP    CPU
10/27 11:15:39  CWXN   233          0.0049  0.0020
10/27 11:15:40  CWXN   234          0.3006  0.0016
10/27 11:15:40  FXS1   236          0.0027  0.0018
10/27 11:15:40  CPIH   235          0.0288  0.0231
10/27 11:15:40  FXC1   232   X934   1.0082  0.0349
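
To make the gap concrete, the numbers from the table above work out as follows (all values in seconds):

0.0049 + 0.3006 + 0.0288 + 0.0027 = 0.3370   (CWXN 233 + CWXN 234 + CPIH 235 + FXS1 236)
1.0082 - 0.3370 = 0.6712                     (FXC1 232 response time minus that sum)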

There are three segments of time where the RECEIVE took a long time, marked in the trace by an asterisk (*) next to the elapsed time. The key is the TCP/IP or UNIX System Services (USS) perspective in the CTRACE: when TCP/IP received the SEND for the outgoing data and when it gave that data to CICS for the RECEIVE.

Looking more closely (from the trace) at the three RECEIVEs:


Case 1
233 SENDs 18            11:15:39.9520799375 00.0000017656
232 socket RECEIVE wait 11:15:39.9522577656 00.0000033281
233 SENDs 57            11:15:39.9525804697 00.0000028281
232 exit RECEIVE wait   11:15:40.2331621093 00.2796516240*
232 received 6F=18+57   11:15:40.2332788281 00.0000057656
Timeframe of interest:  11:15:39.9522577656 11:15:40.2331621093

                                   
Case 2
232 SENDs 34E           11:15:40.2362710625 00.0000020000
234 socket RECEIVE wait 11:15:40.2367593906 00.0000356250
234 exit RECEIVE wait   11:15:40.5341173447 00.2973579541*
234 received 34E        11:15:40.5342929541 00.0000055156
Timeframe of interest:  11:15:40.2362710625 11:15:40.5341173447


Case 3
235 SENDs 24C           11:15:40.5623532656 00.0000021093
232 socket RECEIVE wait 11:15:40.5628594687 00.0000033125
232 exit RECEIVE wait   11:15:40.8344376879 00.1696457187*
232 received 24C        11:15:40.8345477036 00.0000387343
Timeframe of interest:  11:15:40.5623532656 11:15:40.8344376879

TCP/IP analysis of a CTRACE taken during these times showed that the delays noted in the trace correspond to delayed acknowledgments (200 ms). The recommendation is to code NODELAYACKS on the TCPCONFIG statement in the TCP/IP PROFILE.
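
As a minimal sketch, the global change is a single statement in the TCP/IP PROFILE data set (the statement and parameter names are as documented; the comment line is only illustrative):

; Acknowledge TCP segments immediately for all connections on this stack
TCPCONFIG NODELAYACKS

With this in place, the stack no longer holds acknowledgments for the roughly 200 ms delayed-ACK interval that the trace cases above were waiting on.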

Testing was repeated in this environment with NODELAYACKS set globally on the TCPCONFIG statement in the TCP/IP PROFILE, and it showed the following improvements.


                              OLD (DELAYACKS)  NEW (NODELAYACKS)
                              Response CPU     Response CPU
CICS Web Services without SSL 0.9042   0.0018  0.0888   0.0016
CICS Web Support without SSL  0.8484   0.0019  0.0271   0.0014
CICS Web Services with SSL    1.6228   0.0020  0.1344   0.0015
CICS Web Support with SSL     1.5414   0.0018  0.0606   0.0016

All response and CPU times are in seconds.

Resolving The Problem

Change the TCP/IP setting to NODELAYACKS on the TCPCONFIG statement in the TCP/IP PROFILE. This should significantly reduce the total response time for the transactions and also reduce total CPU slightly. Alternatively, if you have DELAYACKS specified (the default) on the TCPCONFIG statement, you can specify NODELAYACKS on specific PORT statements as an override.
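
If you keep DELAYACKS as the stack-wide setting, a PORT-level override might look like the sketch below. The port number (3000) and job name (CICSAOR1) are placeholders for illustration only; substitute the port and region name that your CICS Web Services listener actually uses:

TCPCONFIG DELAYACKS
; NODELAYACKS on the CICS listener port overrides the stack-wide DELAYACKS
PORT
     3000 TCP CICSAOR1 NODELAYACKS

Only connections on the overridden port acknowledge immediately; all other traffic keeps the delayed-acknowledgment behavior.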

The NODELAYACKS setting is discussed in the z/OS Communications Server IP Configuration Reference:

Specifying the NODELAYACKS parameter on the TCPCONFIG statement overrides the specification of the DELAYACKS parameter on the TCP/IP stack PORT or PORTRANGE profile statements for the port used by a TCP connection, or on any of the following statements used to configure the route used by a TCP connection:

  • The TCP/IP stack BEGINROUTES or GATEWAY profile statements
  • The Policy Agent RouteTable statement
  • The OMPROUTE configuration statements

...(DELAYACKS) is the default, but the behavior can be overridden by specifying the NODELAYACKS parameter on the TCP/IP stack PORT or PORTRANGE profile statements for the port used by a TCP connection, or on any of the following statements used to configure the route used by a TCP connection:
  • The TCP/IP stack BEGINROUTES or GATEWAY profile statements
  • The Policy Agent RouteTable statement
  • The OMPROUTE configuration statements

So DELAYACKS on the TCPCONFIG statement can be overridden at the PORT level with NODELAYACKS, but if you specify NODELAYACKS on the TCPCONFIG statement, it applies globally and cannot be overridden for an individual PORT.
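
As a rough sketch of how you might check and then apply the setting from the MVS console, assuming a TCP/IP started task named TCPIPA and an obeyfile member USER.TCPPARMS(NODACK), both of which are placeholder names:

D TCPIP,TCPIPA,NETSTAT,CONFIG
V TCPIP,TCPIPA,OBEYFILE,USER.TCPPARMS(NODACK)

The Netstat CONFIG report should show the current DelayAcks value for the stack, and the VARY OBEYFILE command applies updated profile statements without restarting the stack (assuming the parameter can be updated dynamically in your release; otherwise the change takes effect at the next stack start).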

[{"Product":{"code":"SSGMGV","label":"CICS Transaction Server"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Component":"Web Services","Platform":[{"code":"PF035","label":"z\/OS"}],"Version":"4.2;4.1;3.2;3.1","Edition":"","Line of Business":{"code":"LOB35","label":"Mainframe SW"}}]

Product Synonym

CICS/TS CICS TS CICS Transaction Server

Document Information

Modified date:
15 June 2018

UID

swg21250026