Extending the Coordinated Request Reply Global Cache sample

You can use the global cache to share data across processes that are running in the same integration node or across multiple integration servers. In this sample, the Coordinated Request Reply Global Cache application, which contains a request message flow and a reply message flow, is deployed to two integration servers: CoordinatedRequestReplyExecGroup and AdditionalCoordinatedRequestReplyExecGroup. Because the same application runs on both integration servers, data is shared across them seamlessly and you do not need to know which container server (running on an integration server) holds the data that the request flow added.

To demonstrate that the global cache is shared by more than one integration server, complete the following steps.

  1. Ensure that the Coordinated Request Reply Global Cache application and Coordinated Request Reply Backend application are deployed to both integration servers.
  2. In integration server AdditionalCoordinatedRequestReplyExecGroup, stop both applications.
  3. In integration server CoordinatedRequestReplyExecGroup, stop the reply message flow in the Coordinated Request Reply Global Cache application.
  4. Send a message through the request flow that is running on CoordinatedRequestReplyExecGroup. In the requester test client in the Coordinated Request Reply Global Cache application, click Enqueue, then click Send Message.
  5. Stop the Coordinated Request Reply Global Cache application that is running on CoordinatedRequestReplyExecGroup.
  6. Start the Coordinated Request Reply Global Cache application on AdditionalCoordinatedRequestReplyExecGroup.
  7. Retrieve the reply message, which has been processed on AdditionalCoordinatedRequestReplyExecGroup. In the requester test client in the Coordinated Request Reply Global Cache application, click Dequeue, then click Get Message.

You can see that data added by the request message flow that is running on integration server CoordinatedRequestReplyExecGroup was retrieved by the reply message flow that is running on integration server AdditionalCoordinatedRequestReplyExecGroup.
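
If you prefer to drive steps 2 to 6 from a command console instead of from the toolkit, the same stop and start operations can be scripted with the mqsistopmsgflow and mqsistartmsgflow commands. The following sketch is illustrative only: it assumes an integration node named IB9NODE (the node name used in the configurable service example later in this section), uses the application names as they are written in this sample, and uses ReplyFlowName as a placeholder for the actual name of the reply message flow as it appears under the application in IBM Integration Explorer.
    rem Step 2: stop both applications in AdditionalCoordinatedRequestReplyExecGroup
    mqsistopmsgflow IB9NODE -e AdditionalCoordinatedRequestReplyExecGroup -k "Coordinated Request Reply Global Cache"
    mqsistopmsgflow IB9NODE -e AdditionalCoordinatedRequestReplyExecGroup -k "Coordinated Request Reply Backend"
    rem Step 3: stop only the reply message flow in CoordinatedRequestReplyExecGroup
    mqsistopmsgflow IB9NODE -e CoordinatedRequestReplyExecGroup -k "Coordinated Request Reply Global Cache" -m ReplyFlowName
    rem Step 5: stop the Global Cache application in CoordinatedRequestReplyExecGroup
    mqsistopmsgflow IB9NODE -e CoordinatedRequestReplyExecGroup -k "Coordinated Request Reply Global Cache"
    rem Step 6: start the Global Cache application in AdditionalCoordinatedRequestReplyExecGroup
    mqsistartmsgflow IB9NODE -e AdditionalCoordinatedRequestReplyExecGroup -k "Coordinated Request Reply Global Cache"
Steps 4 and 7 still use the requester test client in the toolkit to send and retrieve the message.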

IBM Integration Bus also supports the use of an external WebSphere eXtreme Scale grid to store and retrieve data. To demonstrate the use of an external grid, complete the following steps.

  1. Create a configurable service that specifies the host name, port, and grid name of your external grid. For more information about creating this configurable service, see Connecting to a WebSphere eXtreme Scale grid in the IBM Integration Bus information center. You can create configurable services either by using the mqsicreateconfigurableservice command or by using IBM Integration Explorer. Here is an example of the command:
    mqsicreateconfigurableservice IB9NODE -c WXSServer -o xc10Connection
    -n catalogServiceEndPoints,gridName,overrideObjectGridFile,securityIdentity
    -v "server.ibm.com:2809","myGrid","C:\Brokers\WebSphere_eXtreme_Scale\xc10\xc10Client.xml","id1"
    The following example shows the "Create Configurable Service" wizard in IBM Integration Explorer. To access this wizard, expand the integration node to which you want to add the configurable service, right-click Configurable Services, then click New > Configurable Service.
    [Image: Using IBM Integration Explorer to create a new Configurable Service]
    Note that the parameters "overrideObjectGridFile" and "securityIdentity" might be optional, depending on the setup of your external grid. If you need a security identity, create one by using the mqsisetdbparms command, as described in Connecting to a WebSphere eXtreme Scale grid.
  2. In both message flows in the Global Cache application, open each JavaCompute node and edit the "getGlobalMap()" method call so that it takes two input parameters: the map name that you want to use and the name of the configurable service that you just created; for example,
    MbGlobalMap myMap = MbGlobalMap.getGlobalMap("EXAMPLE.LUT", "xc10Connection");
    Note that some restrictions exist for map names on an external grid. For more information, consult the WebSphere eXtreme Scale documentation. A map name of <map_name>.LUT works for the purposes of this sample. A sketch of the edited JavaCompute code follows this list.
  3. Redeploy the Global Cache application to both integration servers, then run the sample as before. The sample runs in the same way as it did for the embedded cache, but now the MQMDs are stored on your external grid.
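
For reference, here is a minimal sketch of what the edited JavaCompute node in the request flow might look like after step 2, using the EXAMPLE.LUT map name and the xc10Connection configurable service from the example above. The class name and the msgId and mqmd placeholder values are illustrative; in the sample, the node derives these values from the incoming message before caching the MQMD.
    import com.ibm.broker.javacompute.MbJavaComputeNode;
    import com.ibm.broker.plugin.MbException;
    import com.ibm.broker.plugin.MbGlobalMap;
    import com.ibm.broker.plugin.MbMessageAssembly;
    import com.ibm.broker.plugin.MbOutputTerminal;

    public class StoreOriginalMQMDExternalGrid extends MbJavaComputeNode {

        public void evaluate(MbMessageAssembly inAssembly) throws MbException {
            MbOutputTerminal out = getOutputTerminal("out");

            // Resolve the map on the external grid through the configurable service
            // that was created with mqsicreateconfigurableservice.
            MbGlobalMap myMap = MbGlobalMap.getGlobalMap("EXAMPLE.LUT", "xc10Connection");

            // Placeholder values; the sample's node stores the original MQMD keyed
            // by the message ID that it reads from the incoming message.
            String msgId = "exampleMessageId";
            String mqmd = "exampleSerializedMQMD";
            myMap.put(msgId, mqmd);

            out.propagate(inAssembly);
        }
    }
The JavaCompute nodes in the reply flow change in the same way: their getGlobalMap() calls gain the configurable service name as a second parameter, and the rest of their logic is unchanged.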

The embedded cache supports an eviction policy to manage the length of time for which data exists in the cache. To demonstrate data eviction in the sample, complete the following steps.

  1. In the request flow, open the JavaCompute Node named "StoreOriginalMQMD" and change the following function call
    MbGlobalMap.getGlobalMap().put(msgID, mqmd);
    so that the "getGlobalMap()" method is
    getGlobalMap("example", new MbGlobalMapSessionPolicy(30))
    This example specifies an eviction time of 30 seconds for the data that is put in the map in this session. You can specify your own map name and eviction time. You cannot use the default map in this case, because it can be accessed only through the no-argument "getGlobalMap()" method, which cannot take an MbGlobalMapSessionPolicy object as a parameter; data that is put to the default map therefore expires only when the cache itself does. A sketch of the edited calls in both nodes follows this list.
  2. In the reply flow, open the JavaCompute Node named "RestoreOriginalMQMD" and again change the following function call
    MbGlobalMap.getGlobalMap().get(correlId)
    so that the "getGlobalMap()" method is
    getGlobalMap("example")
    where "example" is the name of the map that you specified in the request flow. Because you are not changing any data in the map, there is no need to specify an eviction policy here; the map object still refers to the same map because the names that are specified in each JavaCompute Node are the same.
  3. Also in the reply flow, delete the JavaCompute Node named "RemoveMQMDFromCache". This node is no longer necessary because the MQMD that is stored in the cache is now evicted automatically after the length of time that you specified in step 1 (30 seconds in this example), rather than being removed manually by the node.
  4. Redeploy the Global Cache application to both integration servers, then run the sample as before. The data is now evicted dynamically after the specified eviction time, rather than being removed manually by the final JavaCompute node. For more information about eviction policies in the global cache, read the "Interaction with the global cache" section of Embedded Global Cache.
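
For reference, here is a minimal sketch of the two edited calls from steps 1 and 2, gathered into one illustrative class. The class name and the store and restore method names are not part of the sample; msgID, mqmd, and correlId stand in for the values that the StoreOriginalMQMD and RestoreOriginalMQMD nodes actually take from the message.
    import com.ibm.broker.plugin.MbException;
    import com.ibm.broker.plugin.MbGlobalMap;
    import com.ibm.broker.plugin.MbGlobalMapSessionPolicy;

    public class EvictionSketch {

        // Request flow (StoreOriginalMQMD): put the MQMD into the "example" map
        // with a 30-second eviction time for data added in this session.
        static void store(Object msgID, Object mqmd) throws MbException {
            MbGlobalMap map = MbGlobalMap.getGlobalMap("example", new MbGlobalMapSessionPolicy(30));
            map.put(msgID, mqmd);
        }

        // Reply flow (RestoreOriginalMQMD): read the same map by name. No session
        // policy is needed here because this flow does not add data to the map.
        static Object restore(Object correlId) throws MbException {
            MbGlobalMap map = MbGlobalMap.getGlobalMap("example");
            return map.get(correlId);
        }
    }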
