Simulating Your Workload Environment with Domino Server.Planner

Domino Server.Planner runs on any Notes 4.5 workstation and analyzes your production environment loads against certified NotesBench vendor data across a range of systems. Domino Server.Planner can be downloaded from the Sandbox.

Russ Lipton, Consultant

Russ Lipton is a new, but passionately committed, user of Domino/Notes with 15 years of experience as an industry consultant on emerging technologies to Fortune 500 companies. He has published numerous industry articles and written several books, including one on multimedia application development for Random House. He now concentrates his efforts on developing commerce-ready Internet sites using Domino. Russ would like to thank Carol Zimmet and Wendi Pohs for their contributions to this article.



31 March 1997

Introduction

In today's rapidly changing hardware and software environment, it has never been more important to test hardware and software systems against current and projected workloads. That is exactly the problem. With the decision point variables changing so frequently, neither vendors nor customers alone can keep up.

At the same time, Iris recognizes the importance of delivering performance analysis tools to Notes shops. Customers cannot be expected to maintain expensive machine configurations or consume precious human resources on avoidable testing.

Domino Server.Planner, a new Notes application that runs on any Notes 4.5 workstation, simulates production environment loads against certified vendor data across a range of systems, making it possible for you to analyze and publish the results to decision makers within a familiar Notes interface. Domino Server.Planner is currently available in beta for download from Notes.Net.

Answering The Critical Capacity Questions

As always, Notes users must answer at least the following key questions to support their next round of investment decisions:

  • Which servers should I use to match my particular load environment?
  • Which servers should I use, given server consolidation as a performance, maintenance and budgetary priority?
  • How do the answers to the questions above change as my load-mix shifts?

That is, how can I anticipate changes as end users gain expertise and take advantage of more complex Notes features (for instance, groupware)? How can I weigh the impact of deploying new and more sophisticated Notes applications over time as server tasks?

These questions are themselves posed against the backdrop of constant changes in hardware systems, operating systems, deployment environments (notably today, the Internet) and, of course, Domino itself.

Vendors, Developers and Customers Working Together

Before describing a typical Server.Planner session, let's look at how the demands of capacity planning have informed its design. Capacity planning to ensure optimal Notes load balancing is a shared responsibility between vendors, Domino developers and customers:

Figure 1. Capacity planning diagram

Vendors.
Vendors have the needed expertise to run the appropriate workloads against their current and emerging hardware (CPUs, I/O), operating systems or other components.

Their role in Server.Planner is to define profiles for Notes-capable machines and to supply the certified performance data which forms the basis for subsequent customer analysis. This data changes frequently as new systems are developed and will be made available for download by customers at http://www.notesbench.org. [Editor's note: The NotesBench Web site should be available in May 1997.]

Domino Developers.
Naturally, no one understands Notes usage (historical, current and anticipated) better than those who develop the product. The developer's role, beyond the development of Server.Planner itself, is to supply vendors with the workload scripts they need to run against their systems.

These scripts reflect an understanding of Notes architecture and internals, as well as of the tasks that Notes users have performed over the years and the applications they will require for today's Web-driven computing.

The scripts, which the developers supply to vendors, stress-test vendor systems across a wide range of expected server loads (for instance, replication) as well as user or application loads. The latter range from simple mail to discussion groups, and from shared scheduling to complex collaboration.

Analysts and Decision Makers.
Once data has been made available by vendors, customers can build customizable queries to test both actual and potential load scenarios. While we distinguish "analyst" and "decision maker" roles, both roles are typically fulfilled by customers and, sometimes, by the same person.

Support for query customization and decision support was the prime design goal of Server.Planner.

Exercise Those Cycles: NotesBench

The production of objectively verifiable, relevant data is the heart of the capacity planning process.

Data value depends strictly on the quality of the simulations. The Server.Planner Vendor database is generated by carefully constructed NotesBench workload results. The scripts execute for periods as long as 6 to 8 hours (a 2-hour minimum is recommended to vendors), generating data across a wide range of loads.

Currently, scripts have been designed to track these separable Notes tasks:

  • Mail
  • Mail and Discussion databases
  • Groupware (Mail, Discussion Databases, Full-Text Search and related tasks)
  • Replication Hub
  • Mail Hub

Additional scripts will be supplied to vendors in the future to track:

  • Discussion Database activities only
  • Calendar and scheduling
  • Web walker and buyer activities

"Web walkers" are defined as users who execute simple Web browsing strategies that emphasize static document retrieval and display.

"Web buyers" are defined as site users who perform complex commerce-typical activities. These tests yield results that simulate concurrent usage, forms completed, megabytes added to the site and more.

In response to user feedback, Iris has also added a "probe" task that runs at the same time as NotesBench, opening and closing databases. This simulates additional ad hoc usage that complements test execution of recurring applications. It returns relevant indications of server task response under load conditions.

The NotesNum component of NotesBench then rolls up the data collected by NotesBench workloads and the Probe task to provide averages and frequency distributions of response times for submission to the vendor database of Server.Planner.
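The kind of roll-up NotesNum performs can be illustrated with a short Python sketch: an average response time plus a frequency distribution across the response bands that Server.Planner queries against. The function name and sample data are hypothetical, and the fast/medium/slow boundaries follow the definitions used in the query recipe later in this article, not NotesNum's internal code:

```python
def roll_up(response_times_secs):
    """Return the average and a fast/medium/slow frequency distribution.

    Illustrative only: NotesNum's actual roll-up format is sealed;
    the band boundaries here are sub-second (fast), 1 to 3 seconds
    (medium), and over 3 seconds (slow).
    """
    n = len(response_times_secs)
    average = sum(response_times_secs) / n
    dist = {"fast": 0, "medium": 0, "slow": 0}
    for t in response_times_secs:
        if t < 1.0:            # fast: sub-second response
            dist["fast"] += 1
        elif t <= 3.0:         # medium: 1 to 3 seconds
            dist["medium"] += 1
        else:                  # slow: over 3 seconds
            dist["slow"] += 1
    # Express each band as a percentage of all sampled responses
    return average, {band: 100.0 * count / n for band, count in dist.items()}

# Hypothetical sample of measured response times, in seconds
avg, dist = roll_up([0.4, 0.8, 1.2, 2.5, 4.0])
```

A frequency distribution, rather than a bare average, is what lets Server.Planner later answer "what fraction of responses met the target" questions.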

The NotesBench Consortium and Certification

Each of the participants in the Server.Planner cycle has a vested interest in ensuring that test data is precise, non-modifiable and certifiable. This certainly includes vendors.

Currently, IBM, Compaq, Digital, HP, NCR and KMDS have joined the NotesBench Consortium. The Consortium is a vendor group dedicated to working with Lotus to ensure data quality, as well as to enable us to enhance Server.Planner appropriately in the future.

The scripts used by NotesBench, including the Probe, as well as the roll-ups performed by NotesNum, are sealed against modifications of any kind. Data points that vendors submit to the NotesBench Web site are digitally signed. Customers can be confident that the data forming the basis for their queries within Server.Planner is reliable.

The graphic below shows how NotesBench interacts with Server.Planner:

Figure 2. NotesBench and Server.Planner diagram

Analyzing The Data With A Simple Query

As vendors make data available, analysts can perform queries against that data with Server.Planner. Here is a recipe for performing a simple query:

  1. Select "Query on Data" from the Server.Planner Navigator.
    Figure 3. Server.Planner Navigator screen
  2. Choose "Create Query On Data" from the pull-down menu.
  3. Select the tasks performed by your users. Select "Replication Hub" if the server performs database replication; "Mail Hub" if it handles mail; "User Tasks" if it executes end-user database applications. You can select any combination of these activities or all.
  4. For each task selected (we show Replication below), you can specify the amount of the load to be simulated (an * indicates that the test will accept the default values provided).
  5. Select the desired response time for an end user connecting to the server to complete a given computational task (fast is defined as sub-second response time; medium as 1 to 3 seconds; slow as 3.1 to 5 seconds).
  6. Define the needed performance precision. Using the "medium" response time shown above, a setting of "75" (see illustration below) means that the server must return a response in the fast or medium range at least 75% of the time.
  7. Observe that the results returned are cumulative, not discrete. Asking for "medium performance" really means we want either fast (wonderful) or medium (reasonable) performance for a given set of conditions.
  8. Specify the tolerance level for the query. This adjusts for the fact that the test results must balance different vendors using different system configurations across different tests. For example, assuming that "100 spokes" were specified above for replication results, a query tolerance of 10% (see illustration below) means that results will be reported for tests that covered 90 to 110 spokes. A setting of "0" would require return of results for 100 spokes only.
  9. Select the operating systems for which to include test results (any combination).
  10. Choose the vendors to be included in the query (any combination).
  11. Finally, click the "Query" button on the Action Bar to display the results of the query.

It is that easy to build a query and return meaningful results.
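The filtering logic behind the recipe above can be sketched in a few lines of Python: a vendor result qualifies if its measured load falls within the tolerance window around the requested load, and if its cumulative fast-plus-medium response share meets the requested precision. All names, field layouts, and data below are invented for illustration; Server.Planner's actual query implementation is not published:

```python
def matches(result, target_load, tolerance_pct, precision_pct):
    """Hypothetical sketch of one Server.Planner query condition."""
    # Tolerance window: e.g. 100 spokes at 10% tolerance accepts 90-110
    lo = target_load * (1 - tolerance_pct / 100.0)
    hi = target_load * (1 + tolerance_pct / 100.0)
    if not (lo <= result["spokes"] <= hi):
        return False
    # Results are cumulative, not discrete: asking for "medium"
    # precision counts both fast and medium responses toward the bar.
    return result["fast_pct"] + result["medium_pct"] >= precision_pct

# Invented vendor data points for three systems
vendor_results = [
    {"system": "A", "spokes": 95,  "fast_pct": 50, "medium_pct": 30},
    {"system": "B", "spokes": 120, "fast_pct": 60, "medium_pct": 25},
    {"system": "C", "spokes": 105, "fast_pct": 30, "medium_pct": 35},
]

# Query: 100 spokes, 10% tolerance (90-110), 75% precision
qualifying = [r["system"] for r in vendor_results if matches(r, 100, 10, 75)]
```

In this invented data set, system B is excluded by the tolerance window (120 spokes falls outside 90-110) and system C by the precision bar (65% cumulative), leaving only system A.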

Planning The Future

Analyst queries constructed within Server.Planner can, of course, be saved to one or several decision support databases for distribution to those with a need to know. Discrete query steps (for instance, as performed serially above) are collected together in a single form for study at a glance.

If your Notes workstation includes Lotus Components, you can graph query results conveniently by the response time returned for a given configuration (fast, medium, slow) or by the cost of ownership. The graph feature works like a toggle switch, alternating sorts in ascending or descending order.

Figure 4. Machine cost table

Decision makers can also inspect vendor machine profiles as well as the results of individual scripts that were run against that system or related systems. Graphs defined by the same query condition parameters can display:

  • Script results for a single system for a given vendor,
  • Results from several systems for a given vendor,
  • Results from systems from multiple vendors compared to the system of a given vendor.
Figure 5. System Performance: Replication Hub table

For instance, a particular vendor configuration may show degradation for groupware usage beyond the 300 user level but superior performance for small loads. Another vendor's system might drag at the lower level but perform more nimbly against heavier loads.

Comparing graph results can answer typical trade-off questions. For instance:

  • Which machine offers the best performance where price is no object?
  • Which configuration offers acceptable performance at the lowest possible price?
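The two trade-off questions above reduce to simple selections over the graphed results. A hypothetical Python sketch (all configuration names, prices, and response times are invented for illustration):

```python
# Invented query results: price and average response time per configuration
configs = [
    {"name": "Vendor X / 4-way", "price": 42000, "avg_response_secs": 0.7},
    {"name": "Vendor Y / 2-way", "price": 18000, "avg_response_secs": 2.4},
    {"name": "Vendor Z / 2-way", "price": 21000, "avg_response_secs": 1.6},
]

# Best performance where price is no object: lowest response time wins
best = min(configs, key=lambda c: c["avg_response_secs"])

# Acceptable performance (medium: 3 seconds or better) at the lowest price
acceptable = [c for c in configs if c["avg_response_secs"] <= 3.0]
cheapest_acceptable = min(acceptable, key=lambda c: c["price"])
```

With this invented data, the price-is-no-object answer and the lowest-acceptable-price answer are different machines, which is exactly the trade-off the side-by-side graphs are meant to expose.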

Iris expects to build dedicated modeling support into a future version of the tool. Meanwhile, current features enable significant informal modeling through the ability to save and reuse queries as fresh vendor data becomes available on a monthly or quarterly basis.

By simple projection of expected future needs in, say, six months (100 additional users, use of servers for replication, extension to incorporate groupware, and so on), queries can be run today to anticipate future server requirements. One or more decision maker databases can be saved that reflect varying growth scenarios over time.

Performance modeling and capacity planning will become ever more critical as intranets and extranets are developed for enterprise-wide (and cross-partner) systems around the world. Server.Planner provides the foundation for a new generation of decision support for your Notes environment.

Complete documentation

For complete details on setting up and using Domino Server.Planner, download the Beta documentation (8544Kb).

[Editor's note: As with any performance analysis tool, we are obliged to offer the disclaimer that Server.Planner results are based on lab results. Mileage will always vary within your own environment.]

Copyright 1997 Iris Associates, Inc. All rights reserved.
