Best practices

2.1.0+

Best practice recommendations for IBM® watsonx Code Assistant for Z Code Optimization Advice are based on the experience of IBM customers, service representatives, and quality assurance testers. These best practices are not requirements and might not fit all environments. The intent is to provide general guidance on common areas of concern when using the software.

Avoid overwriting files
2.5.0+
Ensure that the OUTPUT= field in the JCL specifies a unique value for every analysis run so that previously generated reports are not overwritten.
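For example, one way to keep each run's report unique is to embed a run identifier, such as a date and time qualifier, in the OUTPUT= value. The line below is an illustrative sketch only: the dataset naming convention shown is hypothetical, and the statement on which OUTPUT= is coded depends on your installation's JCL.

    OUTPUT=HLQ.OPTADVC.REPORT.D240601.T0930

A later run would then specify a different qualifier, for example T1415 instead of T0930, so that the earlier report is preserved.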
Metrics and elapsed time
Z Code Optimization Advice uses CPU time, a measure of CPU usage, as the primary metric to evaluate performance. Avoid relying on elapsed time, because many extraneous factors unrelated to your program's performance, such as overall system load, can affect that value.
Test data coverage
Use test data whose code execution coverage is representative of typical execution in your production environment. This gives a more accurate picture of application performance and improves the validity of the problem priorities and performance recommendations. Do not tune your test data to adjust the rankings or measurements of particular problems or programs; doing so may cause Z Code Optimization Advice to recommend fixes that appear significant in the report but have little effect in typical use. Unlike functional testing, performance testing needs data that is representative of typical usage rather than data that covers every use case.
Test data volume
Use a high volume of test data, as the profiling report is more indicative of typical execution in your production environment when more data is used.
Applications should run for a minimum of 10 seconds. A runtime of 60 seconds or more is recommended.
Ensuring valid results
Run consecutive performance tests under similar system conditions to minimize extraneous variables that can skew performance results.
Resolving performance problems
Do not rely solely on a problem's performance impact metric to decide whether the problem is worth resolving. This value does not explicitly predict how much CPU usage will be saved by resolving a given problem, except in limited cases where resolving the problem removes all of the measured instructions.
Evaluating the optimization potential of an application
Use the CPU allocation card on the Application page to quickly determine whether an application's CPU usage is significantly driven by COBOL and whether it is a good candidate for optimization. Applications with a low percentage of COBOL CPU usage may have less optimization potential than applications with a higher percentage.
Then, use the Top programs for optimization card to identify the COBOL programs that consume the most CPU.
Next, navigate to the Problems tab and filter for those COBOL programs, taking note of the volume of critical and high priority problems. Applications that have a high percentage of COBOL CPU usage and contain COBOL programs with many high priority performance problems are likely to benefit significantly from performance optimization.
Metrics for comparing reports
Use the same test data when comparing reports for the most accurate results.
When comparing optimization reports, use CPU time as the comparison metric; it provides the most accurate measure of performance improvement.
Comparing samples is a good relative measure of performance within a single report, but less effective between different reports because the results are highly variable. The number of samples may vary between runs even with the same code and data.
Comparing CPU usage percentage is also a good relative measure of performance within a single report, but it is less effective between different reports because the percentages always total 100% across programs, systems, and subsystems. A change that reduces CPU usage in one area therefore appears as an increase in other areas, even when their absolute CPU time is unchanged. For example, if a fix reduces one program's CPU time from 40 seconds to 30 seconds while the rest of the workload still uses 60 seconds, that program's share drops from 40% to about 33% and the unchanged work rises from 60% to about 67%.