Comments (21)

1 psan22 commented Permalink

Hermann

 
Could you please explain why var://service/time-started always has a value greater than "0"?
 
Thanks
san

2 HermannSW commented Permalink

Sorry for answering late.

 
"var://service/time-started" is after front side parsing complete, so
* request headers have been read
* and front side parsing is done.
 
This takes some time ...
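 
For illustration, here is a minimal sketch (not part of the original posting) of a request rule transform that logs this value with the dp:variable() extension function:
 
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch: log var://service/time-started from a request rule. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:dp="http://www.datapower.com/extensions"
    extension-element-prefixes="dp">

  <xsl:template match="/">
    <!-- Latency point (7), "front side parsing complete": milliseconds from
         transaction start, greater than 0 because reading the request headers
         and parsing the front side input take measurable time. -->
    <xsl:message dp:priority="info">
      <xsl:text>time-started: </xsl:text>
      <xsl:value-of select="dp:variable('var://service/time-started')"/>
    </xsl:message>
    <!-- Pass the input through unchanged. -->
    <xsl:copy-of select="."/>
  </xsl:template>

</xsl:stylesheet>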
 
Hermann-

3 DPRaj commented Permalink

0.008562357 36.3295437 0.015739276 36.22807977 36.33503193 33.67753808 1.016060365 383.8876666 430.8162007 384.0592884 430.8153445 474.396009 423.9146262 384.159242 36.2291798 36.32795609

 
This is the average latency for one service. Here, for positions 8-14, the average is more than 300. I am getting these values because of the backend. Sometimes I get a LATENCY BREACH error. How can I fix this?

4 HermannSW commented Permalink

8-14 are all after "8 response headers received", so your backend does not reply for 5 minutes.
You need to figure out why.
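 
As a rough cross-check (my arithmetic, not part of the original reply): in the averages above, position 5 ("request forwarded", about 36.3) and position 8 ("response headers received", about 383.9) differ by roughly 383.9 - 36.3 = 347.6, so most of the average total of about 430 is time spent waiting for the backend to answer.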

 
Hermann.

5 haranath commented Permalink

If a transaction fails for some reason like a connection error, XSLT execution error, network error, etc., can we get latency logs for that transaction? Do the latency logs show the same transaction ID that we get in the error messages?

 
For example, one transaction failed while executing an XSLT, and the error message shown in the logs was (trans(161406144)[response][XX.XX.XX.XXX]: Execution of 'local:///abc/def/xsl/testResponse.xsl' aborted:). Can I get the latency logs with the same transaction ID (161406144)?

6 HermannSW commented Permalink

I do not think that latency records will be written to the log in the error case; please try it out.

7 HermannSW commented Permalink

I just added an <EDIT> section at the top of the posting above, pointing to the "dpShowLatency" Mozilla Firefox add-on for analyzing IBM DataPower latency log entries.

8 RajeevDP1 commented Permalink

Hi Hermann, I am seeing more time taken by 7 (frontside parsing complete), 6 (frontside stylesheet ready) and 4 (frontside transform complete) during performance testing of services. What tuning can be done so that we reduce the time taken by these steps (7, 6, 4)?

 
Regards,
Rajeev S

9 HermannSW commented Permalink

Hi,

 
More time on "7 front side parsing complete" is to be expected if you test under high concurrency. On 6, are you saying that the value of 6 is higher, or that the difference to 6 (which is the relevant value) gets higher under performance testing?
Hermann.

10 Prasad MHSV commented Permalink

Hi Hermann,

 
I am trying to capture the time spent for each request on DataPower alone, without considering the backend processing time.
However, I want to capture this information without using the 16 latency values.
 
The above post clearly states that using the 4 read-only service variables it is possible to calculate the latency for a transaction.
 
Here are our observations:
1. "var://service/time-started" value is zero. we are fine with this value.
2. "var://service/time-forwarded" value is constant as expected and it gives us the time spent on request processing.
3. But "var://service/time-response-complete" value is not constant at all the actions in the response processing rule.So i am not sure which value to consider for the latency calculation.
Since this value indicates time for "back side parsing complete" , we assumed it should be constant irrespective of how many transform actions are there in response rule.
Please correct me if this is a wrong assumption.
 
4. The problem with using the "var://service/time-elapsed" value is that it is not accurate compared to the latency data point values (16 points).
This is the reason we cannot use the "var://service/time-elapsed" variable in our response stylesheet to calculate the total time spent in response processing:
reading the "time-elapsed" variable value at different places in a stylesheet gives different values.
 
So could you please suggest how accurately we can get the latency values (time spent on DataPower alone) by using the 3 service variables below?
var://service/time-forwarded (5)
var://service/time-started (7)
var://service/time-response-complete (14)
 
And also, if we use the "var://service/time-elapsed" variable, how much deviation is expected compared to the true latency captured using the 16 data points?
 
Thanks,
Prasad

11 HermannSW commented Permalink

Hi,

 
> 1. "var://service/time-started" value is zero. we are fine with this value.
>
This is not always true; if you process big XML input, that value will definitely be greater than 0.
 
 
(5) - (7) is the best you can have for request rule processing time.
 
On the response rule you only have one value, (14), which corresponds to (7). In order to do what you want, you need to add a very last xform action to the response rule that computes var://service/time-elapsed - (14). Sorry, that is the best you can get at transaction runtime.
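 
For example, a minimal sketch of such a last xform action (illustrative only; the variable names are mine):
 
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch: very last transform action in the response rule that
     computes (5) - (7) for the request rule and time-elapsed - (14) for the
     response rule, as described above. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:dp="http://www.datapower.com/extensions"
    extension-element-prefixes="dp">

  <xsl:template match="/">
    <!-- (5) - (7): request rule processing time in ms. -->
    <xsl:variable name="requestRuleMs"
        select="dp:variable('var://service/time-forwarded')
                - dp:variable('var://service/time-started')"/>

    <!-- time-elapsed - (14): approximate response rule processing time in ms;
         only meaningful because this is the last action in the rule. -->
    <xsl:variable name="responseRuleMs"
        select="dp:variable('var://service/time-elapsed')
                - dp:variable('var://service/time-response-complete')"/>

    <xsl:message dp:priority="info">
      <xsl:text>request rule ms: </xsl:text>
      <xsl:value-of select="$requestRuleMs"/>
      <xsl:text>, response rule ms: </xsl:text>
      <xsl:value-of select="$responseRuleMs"/>
    </xsl:message>

    <!-- Pass the response through unchanged. -->
    <xsl:copy-of select="."/>
  </xsl:template>

</xsl:stylesheet>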
 
 
Btw, you will like the far better granularity of extended latency messages available in 7.0.0.0 firmware:
http://www-01.ibm.com/support/knowledgecenter/api/content/SS9H2Y_7.0.0/com.ibm.dp.xg.doc/latency_messages.html
 
Hermann.

12 ansmyke commented Permalink

Hi Hermann,

 
Thanks for this post.
I am using firmware XI52 v6.0.1.0.
 
I am seeing a weird latency log entry for a simple HTTP GET call. The entry is:
0 289 0 0 0 0 0 289 290 289 290 290 290 289 0 289
 
1) I am unable to understand how the value at position "2" can be higher than the value at "5", as my MPGW is in streaming mode, so the headers should be transmitted to the backend before the entire message body is sent.
 
2) The value at position 2 (289) should not be equal to the value at position 8 (289), as between the request and the response there is an external API which returns the response.
Here is the call detail:
 
DP -> http proxy -> firewall -> internet -> external API -> firewall -> http proxy -> DP
 
 
 
Thanks,
Mayank

13 HermannSW commented Permalink

Hi Mayank,

 
As stated above, the entries need to be permuted to give the correct order.
Although you mentioned streaming (which may show a different order), all seems to be fine here.
 
Please see your latency log entries analyzed by the Mozilla Firefox add-on mentioned at the top of this blog posting -- I do not see anything suspicious.
 
Hermann.

14 HermannSW commented Permalink

Because of streaming, some latency entries do not get populated and show up as "0".
You are right that (2) < (5) if (5) is not 0.

15 ansmyke commented Permalink

Thanks for the explanation. I get the first point, but sorry, I didn't quite get your response to the second point. My 2nd point was that there are several hops (DP -> http proxy -> firewall -> internet -> external API -> firewall -> http proxy -> DP) between

 
"entire request transmitted"/ "request headers sent"
and
"response headers received"
 
How come the latency logs show the same time entry for both? I ran the same test with Buffer mode set for request and response at the MPGW, but again got the same value for these two events. Why? Shouldn't "response headers received" count
entire request transmitted + backend API processing time + network latency?
 
Mayank
