CE average response times

To compare the CE average response times across the different CE/PE guest CPU scenarios, a single average response time was calculated over all individual CE transaction types.

This value is the non-weighted arithmetic mean across all CE operation categories (Create / Update / Retrieval). The next chart shows these overall average response times for the CPU scenarios in relation to their throughput levels.
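As a minimal sketch of this calculation, assuming hypothetical per-category averages (the values below are illustrative, not measured results), the overall value is simply the non-weighted arithmetic mean of the Create, Update, and Retrieval averages:

    public class CeResponseTimeAverage {
        public static void main(String[] args) {
            // Hypothetical per-category average response times in milliseconds
            // (Create, Update, Retrieval) -- illustrative values, not measured data.
            double[] categoryAveragesMs = { 120.0, 150.0, 90.0 };

            double sum = 0.0;
            for (double categoryAverage : categoryAveragesMs) {
                sum += categoryAverage;
            }

            // Non-weighted arithmetic mean across all CE operation categories
            double overallAverageMs = sum / categoryAveragesMs.length;

            System.out.printf("Overall CE average response time: %.1f ms%n", overallAverageMs);
            // Prints: Overall CE average response time: 120.0 ms
        }
    }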

Figure 1. Non-weighted average response times for CE operations in relation to the throughput numbers and CPUs used
(Line graph showing normalized throughput and average CE response times for each CPU scenario.)

Observations

For this particular workload using direct Java API calls, response times below 200 milliseconds are expected and considered good. The average response times measured are stable and low over most of the workload range. At the very highest workload level, as the CE/PE guest CPUs approach maximum utilization, API response times also start to increase, while remaining well below 1 second.

The CE average response times improve considerably as more CPUs are made available on the CE/PE guest; the best response times are achieved with 8 CPUs. In the workload range where the CPUs limit the throughput for a given scenario (usually above 80% CPU load), the response times start to increase. Adding further CPUs to such a CPU-bound system brings the response times back down and allows the throughput to increase further. Given that additional virtual CPUs on the CE/PE guest at the same workload level do not lead to additional CPU overhead (see figure 9) but do improve the response times, the system appears to benefit from a higher degree of parallelism.