Using OpenMP directives
OpenMP directives exploit shared memory parallelism by defining various types of parallel regions. Parallel regions can include both iterative and non-iterative segments of program code.
The #pragma omp pragmas fall into the following general categories:
- The #pragma omp pragmas for defining parallel regions in which work is done by threads in parallel (#pragma omp parallel). Most of the OpenMP directives either statically or dynamically bind to an enclosing parallel region.
- The #pragma omp pragmas for defining how work is distributed or shared across the threads in a parallel region (#pragma omp sections, #pragma omp for, #pragma omp single, #pragma omp task).
- The #pragma omp pragmas for controlling synchronization among threads (#pragma omp atomic, #pragma omp master, #pragma omp barrier, #pragma omp critical, #pragma omp flush, #pragma omp ordered).
- The #pragma omp pragmas for defining the scope of data visibility across parallel regions within the same thread (#pragma omp threadprivate).
- The #pragma omp pragmas for task synchronization (#pragma omp taskwait).
Including clauses in the #pragma omp pragmas can fine-tune the behavior of the parallel or work-sharing regions. For example, a num_threads clause can control the number of threads used for a parallel region pragma.
The #pragma omp pragmas generally appear immediately
before the section of code to which they apply. The following code
defines a parallel region in which iterations of a for loop
can run in parallel:
#pragma omp parallel
{
   #pragma omp for
   for (i = 0; i < n; i++)
      ...
}
The following example defines a parallel region in which two or more non-iterative sections of program code can run in parallel:
#pragma omp parallel
{
   #pragma omp sections
   {
      #pragma omp section
         structured_block_1
      #pragma omp section
         structured_block_2
      ...
   }
}
For a detailed description of the OpenMP directives, see Pragma directives for parallel processing in z/OS XL C/C++ Language Reference.
