
Comments (21)

1 psan22 commented Permalink


Could you please explain why var://service/time-started always has a value greater than "0"?

2 HermannSW commented Permalink

Sorry for the late answer.

"var://service/time-started" is after front side parsing complete, so
* request headers have been read
* and front side parsing is done.
This takes some time ...
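To see this concretely: according to the point numbering used later in this thread, "var://service/time-started" corresponds to latency point 7 ("front side parsing complete"), in milliseconds from the start of the transaction. A minimal Python sketch, under those assumptions (the numbering and units come from this thread, not from an official mapping, and the record values below are made up for illustration):

```python
# Sketch, assuming: latency records hold 16 whitespace-separated millisecond
# values, and point 7 (1-based) is "front side parsing complete" -- which is
# what var://service/time-started reports.

def time_started_ms(latency_record):
    """Return latency point 7 (1-based) from a whitespace-separated record."""
    points = [float(v) for v in latency_record.split()]
    if len(points) != 16:
        raise ValueError("expected 16 latency points, got %d" % len(points))
    return points[6]  # point 7, 1-based

# Hypothetical record, illustrative values only (milliseconds).
record = "0.01 5.2 0.02 5.9 6.0 5.5 1.2 40.1 45.3 40.2 45.3 50.0 48.0 40.3 6.1 6.2"
print(time_started_ms(record))  # 1.2
```

Because header read and front side parsing always take some nonzero time, this point (and hence the variable) is always greater than 0.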

3 DPRaj commented Permalink

0.008562357 36.3295437 0.015739276 36.22807977 36.33503193 33.67753808 1.016060365 383.8876666 430.8162007 384.0592884 430.8153445 474.396009 423.9146262 384.159242 36.2291798 36.32795609

This is the average latency for one service. Here points 8-14 average more than 300. I am getting these values because of the backend. Sometimes I get a LATENCY BREACH error. How can I fix this?

4 HermannSW commented Permalink

8-14 are all after "8 response headers received", so your backend does not reply for 5 minutes.
You need to figure out why.
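For readers following along, the averaged record above can be split into its 16 points and scanned for the slow ones mechanically. A small Python sketch (assuming, as in this thread, whitespace-separated millisecond values with point 8 being "response headers received"; the threshold of 300 is just taken from the comment above):

```python
# Split the averaged latency record quoted above (assumed milliseconds)
# and report which of the 16 points exceed a threshold of 300 ms.

record = ("0.008562357 36.3295437 0.015739276 36.22807977 36.33503193 "
          "33.67753808 1.016060365 383.8876666 430.8162007 384.0592884 "
          "430.8153445 474.396009 423.9146262 384.159242 36.2291798 "
          "36.32795609")

points = [float(v) for v in record.split()]
assert len(points) == 16

# 1-based point numbers whose value exceeds 300 ms
slow = [i + 1 for i, v in enumerate(points) if v > 300]
print(slow)  # [8, 9, 10, 11, 12, 13, 14]
```

Everything at or after point 8 ("response headers received") is slow while the earlier points are not, which is why the delay is attributed to the backend rather than to DataPower processing.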


5 haranath commented Permalink

If a transaction fails for some reason such as a connection error, XSLT execution error, network error, etc., can we get latency logs for that transaction? Do the latency logs show the same transaction ID that we get in the error messages?

For example, one transaction failed while executing an XSLT, and the error message shown in the logs was (trans(161406144)[response][XX.XX.XX.XXX]: Execution of 'local:///abc/def/xsl/testResponse.xsl' aborted:). Can I get the latency logs with the same transaction ID (161406144)?
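One part of this that can be checked mechanically: the transaction ID in an error message like the one quoted above can be extracted with a regular expression and then matched against latency log lines, if any are written for the failing transaction. A hedged Python sketch (the log line is just the fragment quoted in the comment, not a full DataPower log format specification):

```python
import re

# The error-message fragment quoted in the comment above.
error_line = ("trans(161406144)[response][XX.XX.XX.XXX]: Execution of "
              "'local:///abc/def/xsl/testResponse.xsl' aborted:")

# DataPower prefixes transaction-scoped messages with "trans(<id>)".
m = re.search(r"trans\((\d+)\)", error_line)
trans_id = m.group(1) if m else None
print(trans_id)  # 161406144
```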

6 HermannSW commented Permalink

I do not think that latency records will be written to the log in the error case; please try it out.

7 HermannSW commented Permalink

I just added an <EDIT> section at the top of the above posting, pointing to the "dpShowLatency" Mozilla Firefox add-on for analyzing IBM DataPower latency log entries.

8 RajeevDP1 commented Permalink

Hi Hermann, I am seeing that more time is taken by 7 (front side parsing complete), 6 (front side stylesheet ready) and 4 (front side transform complete) during performance testing of services. What tuning can be done so that we reduce the time taken by these steps (7, 6, 4)?

Rajeev S

9 HermannSW commented Permalink


More time on "7 front side parsing complete" is to be expected if you test under high concurrency. On 6: are you saying that the value of 6 is higher, or that the difference to 6 (which is the relevant value) gets higher under performance testing?

10 Prasad MHSV commented Permalink

Hi Hermann,

I am trying to capture the time spent on DataPower alone for each request, without considering the backend processing time.
However, I want to capture this information without using the 16 latency values.
The above post clearly states that using the 4 read-only service variables it is possible to calculate the latency for a transaction.
Here are our observations:
1. The "var://service/time-started" value is zero. We are fine with this value.
2. The "var://service/time-forwarded" value is constant, as expected, and it gives us the time spent on request processing.
3. But the "var://service/time-response-complete" value is not constant across the actions in the response processing rule, so I am not sure which value to consider for the latency calculation.
Since this value indicates the time for "back side parsing complete", we assumed it should be constant irrespective of how many transform actions are in the response rule.
Please correct me if this is a wrong assumption.
4. The problem with using the "var://service/time-elapsed" value is that it is not accurate compared to the latency data point values (16 points).
This is the reason we can't use the "var://service/time-elapsed" variable in our response stylesheet to calculate the total time spent in response processing:
reading the "time elapsed" variable value at different places in a stylesheet gives different values.
So could you please suggest how accurately we can get the latency values (time spent on DP alone) using the below 3 service variables?
var://service/time-forwarded (5)
var://service/time-started (7)
var://service/time-response-complete (14)
And also, if we use the "var://service/time-elapsed" variable, how much deviation is expected compared to the true latency captured using the 16 data points?
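As a sketch of the arithmetic involved, under the assumption (from the thread above) that each of these variables is a millisecond offset from the start of the transaction: the DataPower-only time would be the total elapsed time minus the window spent waiting on the backend, i.e. the gap between time-forwarded and time-response-complete. The values below are hypothetical, for illustration only, not output from a real device:

```python
# Illustrative millisecond offsets from transaction start (hypothetical values).
# Point numbers in parentheses follow the numbering used in this thread.
time_started = 1.0              # var://service/time-started (7)
time_forwarded = 40.0           # var://service/time-forwarded (5)
time_response_complete = 420.0  # var://service/time-response-complete (14)
time_elapsed = 455.0            # var://service/time-elapsed, read in the LAST response action

backend_wait = time_response_complete - time_forwarded  # waiting on the backend
dp_only = time_elapsed - backend_wait                   # DataPower-only processing time
print(backend_wait, dp_only)  # 380.0 75.0
```

Note that since var://service/time-elapsed keeps increasing as the transaction proceeds, reading it even in the last action of the response rule still misses whatever happens after that action, which is one plausible source of the deviation observed against the 16-point record.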
