Benchmarking the cloud? New metrics required

When I think back to my very first client experience, I realize how easy life was benchmarking on a single mainframe with a customized card deck that profiled most of their production batch jobs' behavior.

At that time, workloads were well known and slow to evolve, computing models were limited, and the same benchmarking process stayed relevant year after year.

Nowadays, the variety of cloud models, sourcing types and offerings requires a different benchmarking approach to remain fully relevant to cloud environments.

From batch to cloud…

Batch jobs are steady workloads: predictive models and projections apply very well, and benchmarking on pure system performance criteria (such as job elapsed time and throughput) is relevant. In contrast, the cloud model brings new needs because of its service model abstraction and calls for new initiatives to develop cloud-specific benchmarks.

As a leading benchmark organization, the Standard Performance Evaluation Corporation (SPEC) has recently created an Open Systems Group (OSG) Cloud Computing Working Group to investigate this need to monitor and measure the performance of cloud systems. IBM has taken an active role in this ongoing initiative, which has already delivered its first public report: Report on Cloud Computing to the OSG Steering Committee.

Why is cloud a benchmark game changer?

Cloud cannot be measured only through the old batch or even transactional key performance indicators (KPIs) such as throughput and response time. New metrics are required to evaluate its unique characteristics, as this non-exhaustive list shows (a short sketch after the list illustrates how a few of these might be computed):

  • Elasticity
    • How quickly a service can adapt to changing customer needs
  • Provisioning response time
    • Time needed to bring up or drop a resource
  • Scale up/down
    • Ability to maintain a consistent unit completion time when solving increasingly larger problems only by adding a proportional amount of storage and computational resources
  • Variability
    • How repeatable the test result is, depending on any configuration or background load changes on the system under test
  • Agility
    • Ability to scale the workload, and the ability of the provisioned system to match the needs of the workload as closely as possible
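
To make these definitions concrete, here is a minimal sketch of how a few of these metrics might be turned into numbers. The sample measurements are hypothetical and this is illustrative only, not CloudBench code:

```python
# Minimal sketch: illustrative calculations for a few of the cloud metrics
# listed above, using hypothetical sample measurements (not CloudBench code).
import statistics

# Hypothetical provisioning response times (seconds) for repeated VM requests.
provision_times_s = [42.1, 39.8, 45.3, 41.0, 44.7]

# Provisioning response time: time needed to bring up a resource.
mean_provision_s = statistics.mean(provision_times_s)

# Variability: how repeatable the result is, expressed here as the
# coefficient of variation (standard deviation / mean) across repeated runs.
variability = statistics.stdev(provision_times_s) / mean_provision_s

# Scale up/down: unit completion time should stay roughly constant when the
# problem size and the resources grow proportionally. A simple efficiency
# ratio compares completion time at the baseline and scaled configurations.
baseline_completion_s = 120.0  # 1x problem on 1x resources (hypothetical)
scaled_completion_s = 131.0    # 4x problem on 4x resources (hypothetical)
scaleup_efficiency = baseline_completion_s / scaled_completion_s

print(f"Mean provisioning time: {mean_provision_s:.1f} s")
print(f"Variability (coefficient of variation): {variability:.2%}")
print(f"Scale-up efficiency: {scaleup_efficiency:.2f}")
```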

IBM has already worked on this topic and acquired expertise in this new space.

Sharing IBM's cloud meta-benchmark (CloudBench)

Through the SPEC OSG report referenced above, we have shared information about IBM's CloudBench meta-benchmark framework, designed for infrastructure as a service (IaaS) clouds.

It automates the execution, provisioning, data collection, management and other steps across an arbitrary number and variety of individual benchmarks.

CloudBench covers the following functions:

  • Exercises the provisioned VMs by submitting requests to applications (individual benchmarks) that run on those VMs.
  • Supports Black Box¹ testing, with some support for embedding data collection nodes inside the system under test (SUT) to collect metrics usually associated with White Box² tests.
  • Exercises the operational infrastructure by submitting VM provision/de-provision requests to the cloud management platform (see the sketch after this list).
  • Manages multiple application sets. The default workload generates various types of workloads, but it can be extended to support local custom application sets.
  • Measures elasticity components (provisioning time and scale-up) as well as variability and agility.
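
To illustrate the operational-infrastructure side of such a framework, here is a minimal sketch of the kind of timed provision/de-provision loop a driver might run. The CloudClient interface and the measure_provisioning_cycles function are hypothetical stand-ins for whatever cloud management API is being exercised; they are not CloudBench's actual API.

```python
# Minimal sketch of the kind of provision/de-provision loop a meta-benchmark
# driver might run. The CloudClient interface below is hypothetical: it is not
# the actual CloudBench API, and a real driver would call the cloud management
# platform (for example through its SDK or REST API) instead.
import time
from typing import Protocol


class CloudClient(Protocol):
    def provision_vm(self, image: str, flavor: str) -> str: ...  # returns a VM id
    def deprovision_vm(self, vm_id: str) -> None: ...


def measure_provisioning_cycles(client: CloudClient, image: str, flavor: str,
                                iterations: int = 5) -> list:
    """Time repeated provision/de-provision requests against the platform."""
    samples = []
    for i in range(iterations):
        start = time.monotonic()
        vm_id = client.provision_vm(image, flavor)
        provision_s = time.monotonic() - start

        # In a fuller driver, the application benchmark (the individual
        # benchmark running on the provisioned VM) would be started here and
        # its black-box metrics collected before tearing the VM down.

        start = time.monotonic()
        client.deprovision_vm(vm_id)
        deprovision_s = time.monotonic() - start

        samples.append({"iteration": i,
                        "provision_s": provision_s,
                        "deprovision_s": deprovision_s})
    return samples
```

A concrete CloudClient implementation wrapping the target cloud's API would be passed to measure_provisioning_cycles, and the resulting samples could then feed the kind of variability and provisioning-time calculations shown earlier.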

The following figure shows a typical execution flow for CloudBench.

CloudBench’s meta-benchmark execution flow.

This IBM meta-benchmark framework has already demonstrated that we have strong foundations to address the need for new metrics in the cloud.

Sharing this benchmark information with the SPEC organization brings new insights and expertise to the community, drawn from our cloud-independent benchmark design.

This is a first but significant step toward moving cloud benchmarks forward, leaving the old card-deck style far behind to master the new benchmarking needs of cloud computing…

¹ The Cloud-Provider provides a general specification of the SUT, usually in terms of how the End-Consumer may be billed.

² The SUT's exact engineering specifications are known and under the control of the tester. The benchmark results allow full comparisons, similar to existing benchmark results.
