Regulation of message flows in WebSphere Process Server – Part 1
Phani_Madgula
I was wondering! When we try to integrate disparate systems, there is always a possibility that the endpoints differ in processing capability. For example, consider the situation described in Figure 1 below:
In the above diagram, SIEBEL is able to pump more requests into the integration module, whereas the Web service endpoint may not have been configured to take this entire load. The thick and thin arrow lines represent the request injection capacity at the source and the request intake capacity at the target, respectively.
Now the question is: how do we regulate the message flow? The specific questions to address are as follows.
1. How do we stop sending requests to the target service while it is unavailable, and resume sending them once it comes back?
2. How do we throttle the request flow when the target cannot keep up with the rate at which the source injects requests (the thick and thin arrow problem)?
While working on support questions related to this topic recently, I gathered some knowledge that I would like to share. Let me answer question (1) in this blog post; I will answer question (2) in my next blog post.
I came across a developerWorks article on WebSphere Process Server throughput management. This article provided great insight into answering question (1). Of course, the solution explained in the article targets WebSphere Process Server V6.2. In WebSphere Process Server V7.0, the same goal can be achieved more elegantly using a built-in mechanism called store-and-forward, which I will explain later.
Using the solution explained in the article, I can introduce two BPEL components, as described in Figure 2 below.
For the global flag, I can use the Object Cache capability provided by WebSphere Application Server to define a shared variable that both the Request Injector and Request Propagator modules can read and update.
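To make the shared-flag idea concrete, here is a minimal sketch. In a real deployment the map would be a `DistributedMap` obtained from the WebSphere Object Cache via a JNDI lookup; here a plain `ConcurrentMap` stands in so the sketch stays self-contained. The class and key names (`ThrottleFlagSketch`, `targetAvailable`) are invented for illustration, not part of any WPS API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the global flag shared by the Request Injector and
// Request Propagator components from Figure 2. The ConcurrentMap is a
// stand-in for the WebSphere DistributedMap (Object Cache).
public class ThrottleFlagSketch {

    private static final String FLAG_KEY = "targetAvailable"; // hypothetical cache key

    private final ConcurrentMap<String, Boolean> cache = new ConcurrentHashMap<>();

    // The Request Propagator updates the flag after probing the target service.
    public void setTargetAvailable(boolean available) {
        cache.put(FLAG_KEY, available);
    }

    // The Request Injector checks the flag before forwarding a request,
    // holding requests back while the target is marked unavailable.
    public boolean shouldForward() {
        return cache.getOrDefault(FLAG_KEY, Boolean.FALSE);
    }
}
```

The design point is simply that both modules consult one shared, cluster-visible variable rather than each probing the target independently.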
I must say, WebSphere Process Server V7.0 provides an even more elegant solution. It introduces a new feature called store-and-forward to address the case where the target service is unavailable. This feature allows you to configure the store-and-forward qualifier on imports. When a target service is invoked asynchronously through an import that carries this qualifier and the service is unavailable, a failed event is generated only for the first request; the subsequent requests are simply stored in the import queue. The import on which the store-and-forward qualifier is configured is marked as a control point in the recovery subsystem, and the status of the control point is set to store at this juncture. This saves a lot of processing, since otherwise the subsequent requests would all fail and generate failed events continuously.
When the target service becomes available, all you need to do is re-submit the initially generated failed event and set the status of the control point to forward through the store-and-forward widgets in Business Space. This automatically relays the stored requests to the target service. For more information, read through this developerWorks article.
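The store-and-forward behaviour described above can be illustrated with a small state-machine sketch. This is not the WPS implementation or API; the names (`StoreAndForwardSketch`, `submit`, `forwardStored`) are invented to model the described lifecycle: one failed event on the first failure, storing after that, then replaying when the operator switches the control point back to forward.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative model of a store-and-forward control point. While in STORE
// state, requests queue up instead of generating failed events; an operator
// action flips the state back to FORWARD and replays the stored requests.
public class StoreAndForwardSketch {
    enum State { FORWARD, STORE }

    private State state = State.FORWARD;
    private final Queue<String> stored = new ArrayDeque<>();

    // Returns a label describing what happened to the request.
    public String submit(String request, boolean targetAvailable) {
        if (state == State.FORWARD && targetAvailable) {
            return "delivered:" + request;
        }
        if (state == State.FORWARD) {
            // First failure: generate a single failed event and switch the
            // control point to STORE so later requests do not fail repeatedly.
            state = State.STORE;
            return "failedEvent:" + request;
        }
        // Control point is in STORE state: queue the request.
        stored.add(request);
        return "stored:" + request;
    }

    // Models the Business Space action: set the control point to forward
    // and relay the stored requests. Returns how many were replayed.
    public int forwardStored() {
        state = State.FORWARD;
        int replayed = stored.size();
        stored.clear();
        return replayed;
    }
}
```

Note that once the control point is in store state, even requests arriving after the target recovers stay queued until the operator explicitly forwards them, which matches the manual re-submit step described above.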
In this case, you do not need the Request Injector and Request Propagator modules, so this is a simpler solution than the one described previously. All you need is the store-and-forward qualifier configured on the import that is wired to the target endpoint.
The solution using store-and-forward is described in Figure 3 below.
In my next posting, I will discuss the thick and thin arrow problem. Hope you enjoyed the topic! Thank you.