Some useful performance features in z/OS C/C++
raym
Compilers are an important tool in your development environment. A good optimizing compiler generates high-performance code without your having to worry about low-level details of the operating system, the internals of the runtime environment, or the hardware architecture; you can concentrate on the business logic of your application. But optimization can consume a lot of resources, in both compilation time and memory. XL C/C++ provides the OPTIMIZE option with three levels (suboptions 1, 2, and 3). Levels 1 and 2 represent a compromise between execution-time performance and compile time, and are the appropriate settings in most cases. But there are situations where you want to let the compiler exploit as many optimization opportunities as it can, regardless of the compilation resources required. That is what optimization level 3 (suboption 3) does. Experience shows that most of a program's time is usually spent in a few areas of the code (the 80-20 rule), so one way of using level 3 is to apply it only to the source files that contain the application's hot spots. The rest of the files, responsible for tasks like initialization, termination, error handling, and user interaction, can be compiled at lower optimization levels, giving you the best of both worlds.
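As a sketch of this mixed-level approach, the commands below compile only the hot files at the highest level. This is illustrative only: it assumes the z/OS UNIX System Services xlc utility and that -O2/-O3 correspond to OPTIMIZE(2)/OPTIMIZE(3); the file names are invented, and option spellings can vary by release, so check the User's Guide for your system.

```shell
# Hypothetical build: hot files get maximum optimization, the rest a
# cheaper level. File names are made up for illustration.
xlc -c -O3 kernel.c hotloop.c        # files containing the hot spots
xlc -c -O2 init.c ui.c errhandle.c   # setup, user interaction, errors
xlc -o app kernel.o hotloop.o init.o ui.o errhandle.o
```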
This leads to the general direction of putting more control into the programmer's hands over the actions the compiler takes. An important optimization technique is loop unrolling. Unrolling eliminates loop-control checking, which in turn can expose more optimization opportunities across loop iterations. But this is a double-edged sword: too much unrolling increases code size and the application's memory footprint. The optimizer normally makes its decisions based on its analysis of the code, but often the programmer knows which loops are hot and can direct the compiler to unroll specific ones. This is the purpose of the UNROLL option and the corresponding pragma directive. You can use them to control which loops are unrolled, and by what factor, applying the optimization benefit to the code that is executed most frequently.
The idea of execution frequency and its impact on optimization leads to Profile-Directed Feedback (PDF). This is an enhancement to interprocedural analysis (IPA) and is used together with the IPA option. IPA performs whole-program analysis: it looks at the code from all source files instead of just one, which exposes many more optimization opportunities than a normal optimizer usually discovers. PDF takes this a step further -- the compiler uses profiling information to direct its optimization. The steps to use PDF are as follows: 1) Build the application with the PDF1 option. This produces a load module instrumented to collect profiling information. 2) Run the instrumented module with typical input. The instrumented code produces a data file containing the execution frequencies of the code. This is called the training run. 3) Build the application again with PDF2. This is the production build, in which IPA uses the profiling data collected in step 2 to perform aggressive optimization. The result is a load module tuned to run optimally with the typical input used in the training run. To use PDF successfully, the input data for the training run must be selected carefully; PDF is most effective when the production data profile, on average, does not vary much from it.
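The three steps above can be sketched as shell commands. Treat this as a hypothetical outline only: it assumes the z/OS UNIX xlc utility with -qipa/-qpdf1/-qpdf2 spellings of the IPA, PDF1, and PDF2 options (batch JCL spells these differently), and the program and file names are invented for illustration.

```shell
# Step 1: instrumented build -- IPA whole-program analysis plus PDF1.
xlc -O2 -qipa -qpdf1 -o payroll main.c calc.c report.c

# Step 2: training run with representative input; the instrumented
# module writes a profile data file recording execution frequencies.
./payroll < typical_input.dat

# Step 3: production build -- PDF2 lets IPA use the recorded profile
# to aggressively optimize the frequently executed paths.
xlc -O2 -qipa -qpdf2 -o payroll main.c calc.c report.c
```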
The above are just a few of the features that can boost your program’s performance. You can find out more in the Programming Guide (htt