Topic
  • 2 replies
  • Latest Post - 2010-06-09T14:43:36Z by Rohit_R
Rohit_R
24 Posts

Pinned topic SCA - Interesting results for bindings performance test?

2010-06-02T16:12:47Z
We are trying out various SCA bindings to see which one would work best for us, and ran a small performance test comparing them.

The client is a micro-flow that synchronously invokes an SCA service over the various bindings. The micro-flow contains a simple loop so that it performs the invocation multiple times (sequentially), based on an input iteration count. To compare the results we also included classic EJB3 bindings.
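
For reference, here is a rough Java sketch of what the measurement loop does. The actual client is a WPS micro-flow, not Java code, and the QuoteService interface with its getQuote operation is an invented stand-in for the real service contract.

// Rough sketch of the measurement loop described above. The real client is a WPS
// micro-flow; QuoteService and getQuote are invented names used only for illustration.
public class BindingBenchmark {

    /** Invented business interface standing in for the real SCA service contract. */
    public interface QuoteService {
        String getQuote(String symbol);
    }

    /** Invokes the service sequentially and returns the elapsed time in milliseconds. */
    public static long runSequentialInvocations(QuoteService service, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            // Synchronous call over whichever binding (SCA default, WS, EJB3, ...)
            // the reference happens to be wired to.
            service.getQuote("SAMPLE");
        }
        return (System.nanoTime() - start) / 1000000L;
    }
}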

The tests were run in two scenarios:
1) Open SCA services and clients running in the same JVM (WPS)
2) Open SCA service on a WAS node and the client processes on a WPS node

The comparison graphs are attached as images below.

Two observations that we are unable to explain:
1) SCA over the WS binding is performing substantially better than the other bindings
2) SCA native binding performance was significantly worse in a distributed environment than on the same server
Updated on 2010-06-09T14:43:36Z by Rohit_R
  • SteveKinder
    14 Posts

    Re: SCA - Interesting results for bindings performance test?

    2010-06-04T14:29:55Z
    Hi Rohit,

    Thanks for sharing your performance runs. It is clear you are doing a deep dive on the variety of capabilities we've been delivering in our feature pack updates for WebSphere, which is fantastic. Unfortunately, there is no good way to respond to the specific data you've collected through this forum. Our performance group has a stringent set of procedures to isolate runs, and I am not questioning the precision of your runs; it is just not possible for us to assess them without knowing intimate details about your hardware, memory footprints, network connectivity, the SCA benchmark itself, configuration options, the software levels you have, and so on, which is not easily done through the developerWorks forum.

    Data bindings, serialization choices, payload size, and QoS can all drastically affect the measurements, and each protocol behaves differently under these conditions. Also consider the effect of trivial workloads and their effectiveness in measuring real production performance. Many "benchmarks" do echoes or "pings" where the business logic itself is essentially non-existent. IBM and other middleware providers use these trivial workload benchmarks so we can work on runtime infrastructure, but they exaggerate the differences you'll actually see in production, since in many cases the business logic itself makes up the vast majority of the processing time. We also need to consider Java benchmarks specifically. We've found that we need to warm up the run and use long iteration periods, with very controlled JVM settings, to stabilize the effects of things like initialization, JIT compilation, and garbage collection, which can otherwise interfere with producing repeatable results that mirror production workloads. We have seen that a lack of precision in any of these areas can create a wide range of variance in measurements.
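
    To make the warm-up point concrete, here is a minimal, hand-rolled sketch (it is not our benchmark harness; the Runnable is just a stand-in for the service invocation being measured):

    // Minimal warm-up/measurement sketch, for illustration only.
    public class WarmedUpTimer {

        /** Runs a warm-up phase first, then returns the average time per invocation in nanoseconds. */
        public static long timeAfterWarmup(Runnable invocation, int warmupIterations, int measuredIterations) {
            // Warm-up phase: let class loading, JIT compilation, and caches settle
            // before any timing is recorded.
            for (int i = 0; i < warmupIterations; i++) {
                invocation.run();
            }
            // Hint a collection between phases so garbage created during warm-up is
            // less likely to be collected inside the measured window.
            System.gc();

            long start = System.nanoTime();
            for (int i = 0; i < measuredIterations; i++) {
                invocation.run();
            }
            return (System.nanoTime() - start) / measuredIterations;
        }
    }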

    We did recently find a performance problem in the default remote binding while working with another customer, which your workload may also be experiencing; however, it is unlikely to explain away your entire run variance, and there are probably configuration differences that are also contributing to what you are observing. We have observed an "interesting" side effect of turning on WebSphere global security: the default binding automatically enables transport-level security whereas the web service transport does not, and this may also be contributing to the disparity between your measurements. Another point of variability we've observed with customer applications is comparing the dynamic service lookup (getService) to static binding definitions.
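
    For illustration, here is a sketch of those two wiring styles, assuming the OSOA SCA annotations (org.osoa.sca) supported by the SCA feature pack; QuoteService and the reference name "QuoteServicePartner" are invented names:

    import org.osoa.sca.ComponentContext;
    import org.osoa.sca.annotations.Context;
    import org.osoa.sca.annotations.Reference;

    // Sketch only: contrasts a statically injected reference with a dynamic
    // getService lookup. Names are invented for illustration.
    public class QuoteClientComponent {

        /** Invented business interface for illustration. */
        public interface QuoteService {
            String getQuote(String symbol);
        }

        // Static wiring: the reference proxy is injected when the component is initialized.
        @Reference
        protected QuoteService quoteService;

        // The component context is injected so the service can be looked up dynamically.
        @Context
        protected ComponentContext componentContext;

        public String quoteViaStaticReference(String symbol) {
            return quoteService.getQuote(symbol);
        }

        public String quoteViaDynamicLookup(String symbol) {
            QuoteService svc =
                componentContext.getService(QuoteService.class, "QuoteServicePartner");
            return svc.getQuote(symbol);
        }
    }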

    I cannot tell whether your performance experiments are out of technology curiosity or whether you are making architectural decisions based on your findings. Either reason is fine; if it is the latter, though, I'd really like to understand whether there is a real business workload for which you are forced to change technologies due to performance. We'd be happy to work through more specific questions about your runs, but we've found through many years of performance-related questions that the best approach is for you to contact your account team and have them help facilitate a discussion. We are excited to see all of your activity on the forum and look forward to hearing more from you as you continue to explore our product features.

    Steve
  • Rohit_R
    24 Posts

    Re: SCA - Interesting results for bindings performance test?

    2010-06-09T14:43:36Z
    Steve,
    Thanks for the detailed response. I understand that this forum might not be the ideal place for discussing the details of this deployment, as there could be many hardware and software parameters leading to the results we are getting in our environment. We will get in touch with PartnerWorld support to discuss this in further detail.

    Regards
    Rohit