Performance Projections vs "Hope for the Best"
By: Bill Buros
Here's a quick introduction to understanding what you're trying to do when you go looking for performance improvements.
Over the last month or two, we have been focused on several engagements where we are helping to improve an application's performance on Linux. Being in the Linux Technology Center, we focus on Linux itself: the tools, commands, products, and post-processing analysis of the results. So while we often deal first with Power systems, most of the changes and recommendations made to an application will help across platforms. We view that as a Good Thing, and one of the key advantages of Linux: your investment in improvements carries across your systems, and your system choices can be driven by the strengths of each platform.
With the numerous experiments done around improving performance, one of the key practices a performance team likes to highlight is projecting the result of each experiment before running it. Whether the projection turns out right or wrong, what matters is the analytical discipline: understanding why you're running the test, and therefore what you expect the result to be.
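As a concrete illustration of making a projection (the numbers here are hypothetical, not from any real engagement): if your profile tells you what fraction of runtime an experiment will affect, Amdahl's Law gives a quick ceiling on the overall gain you should expect.

```python
# A minimal sketch of projecting an experiment's outcome with Amdahl's Law.
# The fraction and speedup below are made-up illustrative inputs.

def projected_speedup(fraction_affected: float, local_speedup: float) -> float:
    """Overall speedup when `fraction_affected` of runtime becomes
    `local_speedup` times faster and the rest is unchanged."""
    return 1.0 / ((1.0 - fraction_affected) + fraction_affected / local_speedup)

# Profiling shows 30% of time in one routine; we believe we can make it 2x faster.
print(f"{projected_speedup(0.30, 2.0):.2f}x")  # about 1.18x overall
```

If the measured result lands far from the projection, that gap is itself information: either the profile was misleading or the mental model of the bottleneck was wrong, and both are worth chasing down.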
Too often we see application teams randomly trying tests to see if something makes a difference. While that certainly can be fun and educational when you're just trolling for new insights, it's not our recommended analytical approach. We refer to it as "Hope for the Best". We do get a kick out of teams that have randomly tried 8-10 things and are annoyed that nothing helped. We politely work through the approach of gathering specific performance data and information, and then help make more informed assessments about where to focus next.
To support that approach, we often use sar, iostat, mpstat, oprofile, and perf to dig into where an application is spending its time and energy. From there, we can zero in on one aspect of the application and system characteristics, design an experiment, and re-run the application. There are, of course, Java tools, networking tools, disk storage tools, and system analysis tools as well.
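A typical first data-gathering pass might look like the following (the intervals and counts are illustrative, not a prescription; sar, iostat, and mpstat come from the sysstat package on most distributions):

```shell
sar -u 5 12                 # overall CPU utilization: 12 samples, 5s apart
iostat -dxk 5 12            # per-device I/O rates, queue depths, utilization
mpstat -P ALL 5 12          # per-CPU breakdown, handy for spotting imbalance
perf record -a -g sleep 30  # system-wide profile with call graphs for 30s
perf report                 # see where the time actually went
```

The point is to capture a baseline while the application runs its representative workload, so that every later experiment has numbers to be compared against.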
The optimizations can be as simple as making the compiler optimization more aggressive, or recognizing that a Java application is spending a lot of time in garbage collection, or that the application is bottlenecked on disk I/O. You assess the condition, look at the data, and design your next iteration. Then you can do more than simply hope for the best.
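As a hedged sketch of the first case (more aggressive compiler optimization), you can build the same program at two GCC optimization levels and compare timings; the flags are standard GCC options, and the tiny program here is a stand-in for a real application:

```shell
# Hypothetical example: a stand-in hot loop, built at -O0 and -O3.
cat > hot_loop.c <<'EOF'
#include <stdio.h>
int main(void) {
    double sum = 0.0;
    /* partial harmonic sum: a simple CPU-bound loop */
    for (long i = 1; i <= 20000000; i++)
        sum += 1.0 / (double)i;
    printf("%.6f\n", sum);
    return 0;
}
EOF
gcc -O0 -o hot_loop_O0 hot_loop.c
gcc -O3 -o hot_loop_O3 hot_loop.c
time ./hot_loop_O0
time ./hot_loop_O3   # usually faster; measure rather than assume
```

Even here, project first: an optimization-level change only helps the compute-bound portion of the run, so an application dominated by garbage collection or disk I/O will barely move, and the profile should tell you that before you bother recompiling.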