Connection pool management for connections to CICS Transaction Gateway

This article gives an overview of using connection pool management to improve performance, security, availability, and scalability when using IBM CICS Transaction Gateway (CICS TG) to connect to IBM CICS Transaction Server (CICS TS). The article shows you how to use JCA Adapters when connecting to J2EE application servers, but focuses mainly on how to design and implement connection pool management for traditional, non-J2EE applications.


Dongsheng Zhou (dshzhou@cn.ibm.com), Staff Software Engineer, IBM

Dongsheng Zhou is a Staff Software Engineer at the IBM China Software Development Laboratory. He has a Master's degree in Computer Science and has five years of experience in design, implementation, and support for IBM software products on z/OS. His focus areas include CICS Transaction Server, CICS Transaction Gateway, and C/C++ and COBOL programming. You can contact Dongsheng at dshzhou@cn.ibm.com.



Kai Cai (caikai@cn.ibm.com), Advisory Software Engineer, IBM China

Kai Cai is an Advisory Software Engineer at the IBM China Software Development Laboratory. He has a Master's degree in Computer Science and has 14 years of experience in design, implementation, and support for IBM software products on z/OS. His focus areas include CICS Transaction Server, CICS Transaction Gateway, WebSphere MQ, and Java and C/C++ programming. You can contact Kai at caikai@cn.ibm.com.



03 October 2012

Introduction

IBM® CICS® Transaction Gateway (CICS TG) provides access to IBM CICS Transaction Server (CICS TS) while improving performance, security, availability, and scalability. CICS TG receives application transaction requests and passes them to a selected CICS TS server based on load balancing and dynamic server selection policy. When CICS TG passes the transaction request to CICS TS, it wraps the request data either in a COMMAREA or a Container/Channel, and sends it over TCP/IP, EXCI, or IPIC connections to the CICS TS servers. CICS TG then returns the transaction processing result to the customer application in either synchronous or asynchronous mode.

CICS TG offers a standard application programming interface (API), the External Call Interface (ECI), for customer applications. The ECI supports programming languages such as Java™/J2EE, C/C++, and Microsoft® .NET® (including Visual Basic and C#). When a customer application requests a CICS transaction via CICS TG, it sets up a connection to CICS TG, then calls the ECI to pass the request, and deletes the connection after the transaction is completed. ECI works fine for low transaction volumes, but when transaction volumes increase, setting up and deleting the connection for each transaction creates too much system overhead and can greatly reduce transaction performance. Therefore, in scenarios with high concurrency and high transaction volume, CICS TG needs to use the connection pool mechanism to manage the connections and avoid the repeated connection creation and deletion.

For J2EE applications, CICS TG offers the CICS Adapter to provide connection pool management by leveraging J2EE server services. Since CICS Adapter follows J2EE Connector Architecture (JCA) standards and runs in J2EE applications like WebSphere® Application Server, the J2EE servers can provide the connection pool management service for the Adapter connecting to CICS TG. The J2EE application simply calls the JCA adapter interface for transaction requests, and retrieves the result from the same call. It does not need to worry about connection maintenance to CICS TG.

But there is no similar connection pool management feature available for non-J2EE applications, and CICS TG does not include the components to support connection pool management between applications and CICS TG, so customer applications must implement the connection pool management feature. This article takes C/C++ applications as an example and shows you how to implement connection pool management between the non-J2EE applications and CICS TG. The design and implementation methods are applicable for other non-J2EE languages such as Visual Basic and C#.

CICS Transaction Server

CICS TS is an advanced mainframe transaction processing system for z/OS and System z, providing high availability, reliability, and scalability for critical enterprise transaction processing with very low cost per transaction.

Customer applications

In this article, customer applications:

  • Do not run in CICS TS regions
  • Send transaction requests to CICS TS by calling CICS TG interfaces, and receive transaction results from CICS TG interfaces
  • Can be implemented in various programming languages, including Java, C/C++, Visual Basic, or C#

CICS Transaction Gateway

CICS TG provides access to CICS TS with strong security, high performance, high availability, and scalability. It is a market-leading connector to CICS and has been proven at over a thousand customers for enterprise modernization of CICS assets.

CICS Transaction Gateway advantages

Compared to other CICS access options, CICS TG has the following advantages:

  • Customer applications connect to z/OS directly without intervening protocol conversion between TCP/IP and SNA. CICS TG thus reduces call path length, improving efficiency and reducing points of failure.
  • CICS TG can use the External CICS Interface (EXCI) and IP interconnectivity (IPIC) to connect with CICS TS, which are more efficient than TCP/IP connections. When using IPIC, CICS TG can offload the Java workload to the IBM System z Application Assist Processor (zAAP), improving the efficiency of CP utilization and reducing MIPS costs.
  • You can create a high availability group with multiple CICS TGs for load balancing, and select the most suitable CICS TS based on a specified Dynamic Server Selection (DSS) policy. In addition, the high availability group exposes the sole access point (IP address and port) to the customer application, so customer applications are not impacted when a CICS TG high availability group is scaled up by adding new gateway daemons.
  • CICS TG provides APIs for various programming languages.
  • CICS TG enables customer applications to use containers and channels for passing data, which avoids the 32K byte limitation imposed by COMMAREA maximum length.
  • CICS TG supports both synchronous and asynchronous calls, for single-thread and multi-thread scenarios respectively.
  • CICS TG can use connection pools to improve communication efficiency between customer applications and CICS TS.

CICS Transaction Gateway deployment

CICS TG supports various operating systems, hardware platforms, and topologies, so CICS TG deployment is quite flexible. Here is a simplified deployment diagram:

Figure 1. CICS TG deployment

In Figure 1:

  • J2EE applications -- Run on a J2EE server, and call JCA Adapters to connect with CICS TG over TCP/IP connections. J2EE applications can run on either the same or a different machine from CICS TG. For J2EE applications, the J2EE server offers connection pool management for connections with CICS TG, so there is no need for other connection pool management components.
  • Local applications -- Run on the same machine as CICS TG, and call CICS TG local libraries for communication with CICS TG. Local mode has the highest communication efficiency, and there is no need for connection pool management between the local applications and CICS TG.
  • Remote applications -- Run on machines other than the one running CICS TG, and connect with CICS TG via TCP/IP or SSL connections. Deployment is quite flexible, and remote applications need connection pool management to improve communication efficiency between customer applications and CICS TG.

In production environments, you can choose either J2EE applications or remote applications for deployment flexibility. However, over time, many CICS TS customers have developed traditional, non-J2EE applications that cannot run in J2EE application servers, so remote application mode is the only choice for reusing these traditional applications. Because these applications cannot use JCA to leverage the connection pool management provided by a J2EE application server, they must implement connection pool management between the customer applications and CICS TG themselves.

This article takes C/C++ remote applications as examples, describes key considerations in connection pool management design, and provides sample code for implementation.

CICS Transaction Gateway application programming interfaces (APIs)

CICS TG provides multiple APIs:

  • External Call Interface (ECI) -- Enables customer applications to call CICS TS transactions.
  • External Security Interface (ESI) -- Enables customer applications to perform security-related tasks such as querying or changing user IDs and passwords.
  • External Presentation Interface (EPI) -- Available only for CICS TG on distributed platforms. It simulates a 3270 terminal in CICS TS, and enables customer applications to call traditional 3270 transactions in CICS TS.

ECI APIs used in this article:

CTG_openRemoteGatewayConnection

Description: Create the connection to CICS TG for remote applications.

Argument list:

  • char * address: The address of the CICS TG machine. It can be either the machine name or IP address.
  • int port: The port on which the CICS TG listens for the incoming request. The default value is 2006.
  • CTG_ConnToken_t * gwTokPtr: The returned token which refers to the connection created.
  • int connTimeout: The timeout value in seconds for the connection.

CTG_closeGatewayConnection

Description: Release the connection to the CICS TG for the remote applications.

Argument list:

  • CTG_ConnToken_t * gwTokPtr: The token of the connection to be released.

CTG_ECI_Execute

Description: Send ECI request to CICS TG.

Argument list:

  • CTG_ConnToken_t gwTok: The connection token between the applications and CICS TG.
  • CTG_ECI_PARMS * EciParms: The data structure which contains the calling parameters, including info like CICS transaction ID, program ID, COMMAREA or channel, user ID/password, etc.

CTG_getRcString

Description: Get the description of the return code.

Argument list:

  • int returnCode: The return code for which to get the description.
  • char * rcString: Buffer to retrieve the description of the return code.

For complete reference information on the ECI APIs, see "External Call Interface (ECI)" in the CICS Transaction Gateway for Multiplatforms V8.1 information center or the CICS Transaction Gateway for z/OS V8.1 information center.

About connection pool management

Figure 2 shows connection pool management deployed on the same machine as the applications and connected to CICS TG in remote mode, on a Linux platform:

Figure 2. Connection pool management deployment

In a production environment, multiple applications call the connection pool concurrently, so the following connection pool design considerations apply:

  • Build the connection pool manager as a dynamic link library, with only one instance shared by multiple application processes. This technique makes system resource utilization more efficient.
  • Use concurrency control when accessing the connection pool, to ensure that many highly concurrent tasks can call the connection pool simultaneously and safely.
  • Provide error endurance, to prevent unexpected results from unusual calls, such as failed invocations or even abnormal exits from the module.
  • Provide logging to facilitate system maintenance and troubleshooting.

Using the connection queue

The connection pool holds a series of connections between the applications and CICS TG. The pool is managed as a queue, and the queue can be implemented with different data types in different programming languages, such as a linked-list structure in C/C++ or a LinkedList in Java. The queue uses FIFO ordering: when an application needs a connection from the connection pool, it takes the first connection at the head of the queue, and when the application returns the connection, it is inserted at the tail of the queue. Using a C/C++ linked structure as an example, a connection pool node could be defined like this:

Example 1. Defining the node of the connection queue
typedef struct node        /* define linked-list node */
{
   CTG_ConnToken_t data;   /* connection token held by this node */
   struct node *next;      /* next node in the queue */
} node, *link;
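To make the FIFO behavior concrete, here is a minimal, runnable sketch of such a queue. The CTG connection token type requires the CICS TG headers, so it is stubbed here as a void pointer for illustration:

```c
#include <stdlib.h>

/* Stand-in for CTG_ConnToken_t, which comes from the CTG headers. */
typedef void *conn_token_t;

typedef struct qnode {
    conn_token_t data;
    struct qnode *next;
} qnode;

/* Queue with a dummy head node; head.next is the first real element. */
typedef struct {
    qnode head;
    qnode *tail;   /* last node, == &head when the queue is empty */
} conn_queue;

static void queue_init(conn_queue *q) {
    q->head.next = NULL;
    q->tail = &q->head;
}

/* A returned connection is inserted at the tail. */
static void enqueue(conn_queue *q, conn_token_t tok) {
    qnode *n = malloc(sizeof(qnode));
    n->data = tok;
    n->next = NULL;
    q->tail->next = n;
    q->tail = n;
}

/* The next connection handed out is taken from the head; NULL if empty. */
static conn_token_t dequeue(conn_queue *q) {
    qnode *n = q->head.next;
    conn_token_t tok;
    if (n == NULL) return NULL;
    tok = n->data;
    q->head.next = n->next;
    if (q->tail == n) q->tail = &q->head;  /* queue became empty */
    free(n);
    return tok;
}
```

Tokens come back out of the queue in the order they were inserted, which is exactly the FIFO behavior described above.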

Figure 3 shows how the connection pool works:

Figure 3. How connection pool management works

Synchronizing connection pool access

Different programming languages use different designs to support synchronization. Java programs, which run in a JVM independent of the OS, use the synchronized keyword on methods, and use wait()/notify() to wait for signals. OS-dependent programming languages like C/C++ can call locking functions provided by the OS to synchronize access to the connection pool. Below is a C/C++ example on Linux showing access synchronization via a mutex provided by the OS:

Example 2. Using critical sections to synchronize connection pool access
#include <pthread.h>

pthread_mutex_t mutex;
pthread_mutex_init(&mutex, NULL);

pthread_mutex_lock(&mutex);     /* lock */
/* logic start */
. . .
/* logic end */
pthread_mutex_unlock(&mutex);   /* unlock */
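A self-contained version of the same idea is sketched below: two threads repeatedly update shared pool state under the mutex, so the increments and decrements always stay balanced. The function and variable names here are illustrative, not part of the CTG API:

```c
#include <pthread.h>

static pthread_mutex_t pool_mutex = PTHREAD_MUTEX_INITIALIZER;
static int connections_in_use = 0;  /* shared pool state */

/* Each worker "borrows" and "returns" a connection many times. */
static void *worker(void *arg) {
    int i;
    (void)arg;
    for (i = 0; i < 100000; i++) {
        pthread_mutex_lock(&pool_mutex);    /* lock: enter critical section */
        connections_in_use++;               /* logic start */
        connections_in_use--;               /* logic end */
        pthread_mutex_unlock(&pool_mutex);  /* unlock */
    }
    return NULL;
}

int run_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return connections_in_use;  /* 0 if every lock/unlock pair balanced */
}
```

Without the mutex, the two threads could interleave their updates and leave the counter in an inconsistent state; with it, the critical section executes atomically.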

Error endurance of connection pool

Error endurance enhances availability and health of the connection pool. Error endurance design considerations include:

  • If the connections in the connection pool are all in use and no connection is available for a new application request, the connection pool should queue up the application requests and wait for existing connections to be returned to the connection pool. The connection pool can use semaphore synchronization to queue requests and wait for connections to be released:
    1. The new application hangs and waits for the signal that the queue is not empty.
    2. When an application finishes and starts to release its connection back to the connection pool, it triggers a signal that the queue is not empty.
    3. The application on top of the waiting queue detects that signal, continues to execute, obtains the connection, and processes the transaction.
    4. If the hanging application does not receive that signal before a timeout interval expires, then it generates a message that the transaction timed out.
  • The essential parts of the connection pool program, such as the get and return functions that manipulate the connection pool queue, need exception capture and handling to avoid ABENDs, for example by using a try/catch/finally block in Java or setjmp/longjmp in C.

Logging management of connection pool

Logging records key information during the execution of the connection pool, for system maintenance and troubleshooting. Because highly concurrent applications access the connection pool, more than one application may try to update the log simultaneously, so the log management design should provide a synchronization mechanism. Here is some key information that should go in the log file:

When the connection pool is set up:

  • Connection pool setup successful:
    • Timestamp of setup
    • Content of connection token
  • Connection pool setup failed:
    • Timestamp of failure
    • Error code
    • Error message

When the connection is obtained from the connection pool:

  • Connection obtained successfully:
    • Timestamp when connection obtained
    • Content of connection token
  • Connection failed:
    • Timestamp of failure
    • Error code
    • Error message

When the connection is returned to the connection pool:

  • Connection returned successfully:
    • Timestamp of return
    • Content of connection token
  • Connection return failed:
    • Timestamp of return
    • Error code
    • Error message

When the connection pool is closed:

  • Connection pool closed successfully:
    • Timestamp of close
    • Content of connection token
  • Connection pool close failed:
    • Timestamp of close
    • Error code
    • Error message
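A minimal sketch of synchronized logging along these lines is shown below. The function name and the message format are illustrative assumptions; the point is that a single mutex keeps concurrent callers from interleaving their log records:

```c
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t log_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Append one timestamped event line to the log; success records the
   event only, failure records the error code and message as well. */
void pool_log(FILE *logf, const char *event,
              int error_code, const char *message) {
    char stamp[32];
    time_t now = time(NULL);
    strftime(stamp, sizeof(stamp), "%Y-%m-%d %H:%M:%S", localtime(&now));

    pthread_mutex_lock(&log_mutex);   /* serialize concurrent writers */
    if (error_code == 0)
        fprintf(logf, "%s %s ok\n", stamp, event);
    else
        fprintf(logf, "%s %s failed rc=%d: %s\n",
                stamp, event, error_code, message);
    fflush(logf);                     /* keep the log current for debugging */
    pthread_mutex_unlock(&log_mutex);
}
```

A caller would log, for example, pool_log(logf, "getConnection", 0, "") on success, or pool_log(logf, "returnConnection", rc, rcString) on failure, matching the success and failure items listed above.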

Implementing connection pool management

The connection pool provides the calling interface to applications for tasks such as initializing the connection pool and getting, returning, and closing the connections.

Initializing connection pool

int initializePool(char* hostname,int port,int connTimeout)
Example 3. Setting up connection queue
for (i = 1; i <= MAX_CONNECTION; i++) {
   CTG_ConnToken_t gatewayToken = NULL;
   rc = openGatewayConnection(&gatewayToken, hostname, port, connTimeout);
   if (rc == RC_OK) {
      pnode = (link)malloc(sizeof(node));  /* node for the new connection */
      pnode->data = gatewayToken;          /* store the connection token */
      pnode->next = NULL;
      qnode->next = pnode;                 /* qnode is the current tail */
      qnode = pnode;                       /* the new node becomes the tail */
   }
}

Set up a specific number of connections from applications to CICS TG, as specified in the configuration file.

Argument list:

  • char * hostname: CICS TG Machine name, domain name, IP address
  • int port: CICS TG listener port, default is 2006
  • int connTimeout: Connection timeout, specified in seconds

Return code 0 -- Connection pool set up successfully.

Getting available connections from the pool

int getConnection(CTG_ConnToken_t* gatewayTokenOutPtr)
Example 4. Getting the connection token for the connection to CICS TG
if (pnode != NULL && pnode->data != NULL) {
   *gatewayTokenOut = pnode->data;  /* hand the token to the caller */
   head->next = pnode->next;        /* unlink the node from the queue head */
   free(pnode);
}
  • The customer application gets an available connection token from the connection pool.
  • Find the first non-empty element in the queue of the connection pool, which contains an available connection token. This token is delivered to the application and removed from the queue.

Argument list:

  • CTG_ConnToken_t * gatewayTokenOutPtr: token of the returned connection

Return code 0 -- Available connection token obtained successfully.

Figure 4 shows how to obtain an available connection from the connection pool:

Figure 4. Getting available connections from the pool

Returning connections to the connection pool

int returnConnection(CTG_ConnToken_t gatewayTokenIn)
Example 5. Returning connection token to connection pool after transaction completion
link pnode, qnode, lastnode;

qnode = (link)malloc(sizeof(node));  /* node for the returned token */
qnode->data = gatewayTokenIn;        /* token passed in by the caller */
qnode->next = NULL;

/* Walk to the last element of the queue. */
pnode = head->next;
lastnode = head;
while (pnode != NULL) {
   lastnode = pnode;
   pnode = pnode->next;
}
lastnode->next = qnode;              /* append at the tail (FIFO) */
  • The customer application returns the connection token to the connection pool when it is no longer needed.
  • First, find the last element in the queue of the connection pool, then append the returned connection token after it.
  • If there is no available connection token in the queue, then this connection token will be the first connection token.
  • The connection token obtained from the connection pool must be returned before closing the connection pool, or it could cause a system memory leak.

Argument list:

  • CTG_ConnToken_t gatewayTokenIn: The connection token to return to the connection pool

Return code 0 -- Connection token returned to connection pool successfully.

Figure 5 shows how to return the connection to the connection pool:

Figure 5. Returning the connection to the connection pool

Closing the connection pool

int disposeConnectionPool()
Example 6. Freeing connection queue and releasing resources
while (pnode != NULL) {
   link qnode = pnode->next;  /* remember the next node first */
   rc = closeGatewayConnection(&pnode->data);
   if (rc != RC_OK) {
      /* log the error; the node is still freed so the loop terminates */
   }
   free(pnode);
   pnode = qnode;
}

Close the connection pool and release all connections in it. All of the connection tokens in the queue of the connection pool are deleted.

Return code 0 -- Connection pool closed successfully.
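Putting the four interfaces together, a typical application flow looks like the sketch below. The interface names and signatures come from this article; the bodies here are stubs so the flow can run without a CICS TG installation, and the host name and port are illustrative:

```c
#include <stddef.h>

typedef void *CTG_ConnToken_t;  /* stub; the real type comes from the CTG headers */

static CTG_ConnToken_t stub_token = (CTG_ConnToken_t)0x1;

/* Stubbed pool interface, mirroring the signatures in this article. */
int initializePool(char *hostname, int port, int connTimeout) {
    (void)hostname; (void)port; (void)connTimeout;
    return 0;  /* pretend MAX_CONNECTION connections were opened */
}
int getConnection(CTG_ConnToken_t *gatewayTokenOutPtr) {
    *gatewayTokenOutPtr = stub_token;  /* hand out the head of the queue */
    return 0;
}
int returnConnection(CTG_ConnToken_t gatewayTokenIn) {
    return gatewayTokenIn == stub_token ? 0 : -1;  /* append at the tail */
}
int disposeConnectionPool(void) { return 0; }

/* One transaction: borrow a connection, call ECI (omitted), return it. */
int run_one_transaction(void) {
    CTG_ConnToken_t tok = NULL;
    if (getConnection(&tok) != 0) return -1;
    /* ... CTG_ECI_Execute(tok, &eciParms) would go here ... */
    return returnConnection(tok);
}

int application_flow(void) {
    int rc = initializePool("ctghost", 2006, 30);  /* once, at startup */
    if (rc != 0) return rc;
    rc = run_one_transaction();                    /* once per request */
    disposeConnectionPool();                       /* once, at shutdown */
    return rc;
}
```

The key point of the flow is that initializePool and disposeConnectionPool run once per application lifetime, while getConnection and returnConnection bracket each transaction, so no connection is created or destroyed on the per-request path.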

How connection pool management increases efficiency

The connection pool holds a set of connections between customer applications and CICS TG that are established before transactions start. When a customer application needs to send a transaction request to CICS TS, it gets a connection token from the connection pool to send the request, and after transaction completion, it returns the connection token to the connection pool for reuse by other applications. Connection pool management avoids the overhead of establishing and releasing a connection for each transaction request, so it improves overall efficiency, particularly when transaction concurrency is high.

Here are test results from CICS TG V6, showing how efficiency is improved via connection pool management:

  • Persistent connection: Apply connection pool management -- no need to establish and release connection for each transaction request.
  • Non-persistent connection: Do not apply connection pool management -- must establish and release connection for each transaction request.
  • X-axis denotes different COMMAREA lengths.

The comparison shows that connection pool management significantly reduces the average CPU time for a single transaction, particularly when the COMMAREA length is short. For COMMAREA lengths from 100 bytes up to about 6K bytes, connection pool management reduces CPU time per transaction by more than 50%:

Figure 6. Reducing average CPU time for single transaction (in milliseconds)

Similarly, connection pool management more than doubles the ETR for COMMAREA lengths from 100 bytes up to about 16K bytes:

Figure 7. Increasing external transaction rate (ETR)

Conclusion

CICS TG provides access to CICS TS with maximum performance, security, availability, and scalability. With connection pool management between customer applications and CICS TG, the end-to-end efficiency from customer applications to CICS TS is further improved, particularly for applications that only require short COMMAREA length. IBM recommends that customers use J2EE for new applications so they can run in J2EE application servers and use JCA Adapters to take advantage of the connection pool management features.

But for existing applications that cannot run in a J2EE environment, customers need to develop connection pool management themselves to reuse traditional applications and protect their investments. This article showed you how to design and implement connection pool management for non-J2EE applications. While the article used C/C++ applications as examples, the principles apply to applications written in other languages, such as standalone Java (J2SE), Visual Basic, and C#.
