Code that uses elements of structures with multiple REFERs can be very expensive: each reference requires one or more
costly library calls to remap the structure. Many PL/I users have long known that the use of multiple REFERs created a
black hole for performance.
Now, with Enterprise PL/I for z/OS 4.1, for structures where all the elements are byte-aligned, those calls are avoided and
straightforward inline code is generated (because if all elements are byte-aligned, no padding is possible and thus the
address calculations are relatively simple).
To ensure all elements are byte-aligned:
- specify UNALIGNED on the level-1 part of the declare
- declare any NONVARYING BIT as ALIGNED
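Following those two rules, a self-defining structure with multiple REFERs might be declared as below (a sketch: all of the names and sizes here are illustrative, not from any particular application):

```pli
dcl n_a1 fixed bin(31);         /* extents to use at ALLOCATE time */
dcl n_a2 fixed bin(31);
dcl p pointer;
dcl 1 rec unaligned based(p),   /* UNALIGNED on the level-1 */
      2 flags bit(8) aligned,   /* NONVARYING BIT declared ALIGNED */
      2 n1 fixed bin(31),
      2 n2 fixed bin(31),
      2 a1( n_a1 refer(n1) ) char(10),
      2 a2( n_a2 refer(n2) ) char(10);
```

Since every element is byte-aligned, the 4.1 compiler can map references to A1 and A2 with simple inline arithmetic rather than library calls.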
Enterprise PL/I has always supported "named constants", i.e. scalars declared with the VALUE attribute, which make your
code more maintainable than using the constants as-is and which also allow the compiler to produce much better code
than it can for scalars declared with the INITIAL attribute.
With Enterprise 4.1 (actually even with 3.9 although it was not documented then), you can now declare named constants in
structures (as long as all the leaf elements of such a structure have the VALUE attribute and as long as the structure contains
no arrays or unions). This allows you to define namespaces of constants and allows you to convert easily a STATIC structure
that consisted of only scalars with the INITIAL attribute to a structure for which the compiler can generate much better code.
For example, if you had this structure,
1 group_rcs static,
2 ok fixed bin(31) init(0),
2 warning fixed bin(31) init(1),
2 error fixed bin(31) init(3);
you could convert it to
1 group_rcs,
2 ok fixed bin(31) value(0),
2 warning fixed bin(31) value(1),
2 error fixed bin(31) value(3);
and the references to group_rcs.ok, group_rcs.warning and group_rcs.error in your code would work more efficiently
without any change.
PL/I has complex rules for how structures are mapped and for when padding is inserted.
However, it can be important to know when there is padding in a structure (for example,
if you pass that structure to a program compiled by a language, such as C or COBOL,
that may map the structure differently).
With Enterprise PL/I 4.1, if you specify the new NOPADDING suboption to the compiler's
RULES option, the compiler will issue an E-level message for any structure that contains padding.
RULES(NOPADDING) is also useful in detecting changes that might seem innocent but could
be dangerous because of the introduction of padding. For example, there is no padding
in this structure
1 a,
2 b fixed bin(31),
2 c,
3 d char(1),
3 e char(2),
2 f fixed bin(31);
but if the 2-byte field E were changed from CHAR(2) to FIXED BIN(15), so that the structure became
1 a,
2 b fixed bin(31),
2 c,
3 d char(1),
3 e fixed bin(15),
2 f fixed bin(31);
even though the size of E would be unchanged, the structure would now contain some padding bytes,
and the RULES(NOPADDING) option would alert you to this (and if you also specified the AGGREGATE
option, the compiler listing would show you where those padding bytes were).
For many years, the only floating-point representation on z/OS was hexadecimal float. This is a base-16 representation, but most of us have 10 fingers, and most business applications want to perform decimal calculations.
The difference between these two bases leads to problems, as exemplified by this code:
dcl f1 float dec(6);
dcl f2 fixed dec(5,3);
dcl f3 float dec(6);
f1 = 4;
f2 = f1 / 100e0;
put skip data( f2 );
f3 = 100 * f2;
put skip data( f3 );
The output is rather disconcerting: it says that 100*(4/100) = 3.9. The culprit is that 0.04 has no exact base-16 representation; the hexadecimal float value assigned to F1/100E0 is slightly less than 0.04, and the conversion to FIXED DEC(5,3) truncates it to 0.039.
Similar problems exist with IEEE binary floating-point (which PL/I fully supports on z/OS).
However, IBM System z hardware and Enterprise PL/I for z/OS
also support the new IEEE decimal floating-point.
The accompanying hardware lets you perform float calculations as you would with your fingers: it is a true base 10 representation that exploits the speed
of floating point computations as well as the availability (on z/OS) of 16 floating-point registers.
For more information about how to use decimal floating-point with PL/I, see the DFP suboption of the FLOAT compiler option.
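As a sketch of the effect (assuming you are running on hardware with DFP support), recompiling the very same statements with the FLOAT(DFP) option makes the FLOAT DEC variables IEEE decimal floating-point, and the anomaly disappears because 0.04 is exactly representable in base 10:

```pli
*process float(dfp);     /* FLOAT DEC variables now use IEEE decimal float */
dcl f1 float dec(6);
dcl f2 fixed dec(5,3);
dcl f3 float dec(6);
f1 = 4;
f2 = f1 / 100e0;         /* 0.04 is exact in base 10, so f2 = 0.040 */
f3 = 100 * f2;           /* and f3 = 4, as expected */
put skip data( f2, f3 );
```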
In PL/I conversions, BINARY rules over DECIMAL, and FLOAT, over FIXED.
This means, for example, that when an expression contains a BINARY and a DECIMAL operand, then
the result will be BINARY.
Hence, for the assignment in this code
dcl a fixed bin;
dcl b float dec;
b = b + a;
there will be these conversions
a will be converted to FLOAT BIN
b will be converted to FLOAT BIN
the sum will be converted to FLOAT DEC
PL/I has always had a series of built-in functions (BINARY, DECIMAL, FIXED, and FLOAT) to help
control such conversions.
But if you were to change the sample assignment above to
b = b + decimal(a);
then there will be these conversions
a will be converted to FIXED DEC
that result will be converted to FLOAT DEC
the sum will be simply assigned to FLOAT DEC
So one conversion has been eliminated, but there are still two conversions when one should suffice.
The problem is that these functions allow you to specify only one attribute at a time when
you would like to be able to specify two.
Enterprise PL/I 3.8 introduced 4 new built-in functions so you can do this:
FLOATDEC, FLOATBIN, FIXEDDEC, and FIXEDBIN
Now you can rewrite our assignment to
b = b + floatdec(a);
and not only will this code perform faster, it will be easier for someone to understand
and maintain in the future.
If you use the TEST option and Debug Tool, Enterprise PL/I will show you the source as you coded it.
So, for example, if you have code that contains an EXEC SQL statement, you will see in the
debugger just that EXEC SQL statement as you coded it and not the many, many statements generated by the
preprocessor and visible in the listing. This is a big plus since those generated
statements will be largely meaningless to you and would certainly not be ones
you should be debugging and fixing.
The same would be true if you were using the CICS or MACRO preprocessors.
However, since this is PL/I, you can have it both ways: if you do want to debug at the level of the generated code,
then you can do so by using the TEST(SEP) and LISTVIEW options.
This can be particularly useful if you are using the MACRO preprocessor to generate complex code. In that case,
the LISTVIEW(AFTERMACRO) option would let you debug at the level of
the code generated by the MACRO preprocessor (with any EXEC CICS and EXEC SQL statements left
unexpanded if the MACRO preprocessor preceded the CICS and SQL preprocessors).
Do you want to prove the depth of your PL/I knowledge to a prospective employer? Or do you
want to verify the PL/I skills of a company to whom you might outsource your PL/I code?
The PL/I certification, developed jointly by IBM and representatives from PL/I
companies from Europe and the US, can help with these and similar tasks.
On the IBM PL/I professional certification
site you will find that there are two certification levels for PL/I: one for the general PL/I programmer on your team and a harder test
for the leader of that PL/I team.
Check it out.
The history of PL/I stretches back to the 1960s, when IBM, at the prompting of the SHARE user group, delivered the first PL/I compiler so that users would have a language with the combined strengths of Fortran and COBOL. IBM then delivered three releases of OS PL/I Version 2 in the 1980s and PL/I for MVS and VM in the early 1990s. All of these releases were based on a common code base that gradually became old as well as hard and expensive to enhance and maintain.
However, with the advent of the PC, IBM built a completely new PL/I compiler that was shipped first on IBM® Operating System/2® (OS/2) and then ported to Microsoft Windows, the IBM® AIX® operating system, and the mainframe. With this compiler, and with 12 consecutive years of new releases of the IBM® z/OS® version, IBM has improved the optimization of PL/I programs, enhanced their exploitation of IBM® System z® architecture, addressed many customer requirements, and introduced numerous application modernization features.
IBM continues to have a strong commitment to the PL/I language, particularly given its widespread use in many business-critical applications. The current family of IBM PL/I implementations consists of the Enterprise PL/I compiler for the mainframe and the PL/I for AIX compiler, both of which share a common, nearly identical front-end code base, which ensures portability between those platforms.
We are opening this forum to create more direct communication between users of our PL/I compilers and the IBM compiler development organization. We hope you'll find the content informative and interesting, and we look forward to your contributions through questions, comments, and ideas.
Want to experience PL/I for AIX, a productive and powerful development environment for building PL/I applications?
Check out these features in PL/I for AIX, V3.1:
- Provides improved performance via both front-end changes and back-end optimizer enhancements
- Provides an improved debugger that enables you to conveniently debug programs from your Windows-based workstation
- Improves the MACRO preprocessor
- Provides improved support for SQL and CICS
- Boosts productivity with new and improved built-in functions
- Increases quality control with new and improved compiler options
- Boosts serviceability with new diagnostics
Download a 60-day evaluation of PL/I for AIX, V3.1 today
Check out all the information about the latest release of PL/I for AIX. You can get a summary of the release as well as all the announcement details.
See what's new with PL/I for AIX V3.1
including enhancements to leverage the latest POWER7 hardware and interoperability with the latest middleware.
For all the details about the PL/I for AIX, V3.1 announcement, see IBM PL/I for AIX, V3.1 delivers support for the latest IBM POWER7 Systems architecture as well as many functional improvements and usability enhancements
There are many good things about the new PL/I for AIX 3.1 release, but the biggest is that
PL/I for AIX is now fully up-to-date: it had been stuck at its last release in 2004, but
it now has all the language features added to Enterprise PL/I (and PL/I for Windows) in the
years since then.
Recent blog entries point to documents with all the details of what's in the 3.1 release,
but in brief, PL/I for AIX, Enterprise PL/I, and PL/I for Windows now:
- support the same compiler options, including all the RULES suboptions to enforce code quality
- accept the same syntax (and generate the same messages for incorrect programs), including the INONLY, INOUT, and OUTONLY attributes for parameters
- support the same built-in functions, including all the powerful UTF string-handling functions
- contain the same performance enhancements (when not tied to the platform), including the inline mapping of structures with multiple REFERs
There are three, perhaps overlooked, new features of Enterprise PL/I that let you "compile out" code either
unconditionally or conditionally:
To cause the compiler to skip some code unconditionally, you could try to enclose it in comments. But this works
only if that code itself contains no comments. However, as of the 3.9 release, you can enclose code
in %DO SKIP; ... %END;, and then the compiler will unconditionally skip over the enclosed code.
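A minimal sketch of %DO SKIP (the PUT statement here stands in for any code you want compiled out):

```pli
%do skip;
   put skip list( 'debug trace' );   /* never compiled            */
   /* even this comment is skipped, so nested comments are safe   */
%end;
```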
To cause the compiler to skip over some code conditionally, you can enclose it in a %DO; ... %END;
group whose compilation is conditional on the value of the SYSPARM
compiler option. For example, this code
%if sysparm = 'test' %then %do;
put skip list( procname() || sourceline() );
%end;
would be compiled into your object deck only if you specified SYSPARM('test') as a compiler option. Using
this option in this manner makes it easy to compile a
production or a test version of your code from the same source file.
And if all your test code consists only of PUT FILE statements - and if PUT FILE statements are not used elsewhere in
your code - you could cause the compiler to skip over that code conditionally by using the IGNORE( PUT ) compiler
option introduced in the 4.1 release.
Code reviews were once common practice, but at some companies they have been reduced or even eliminated entirely. Often this
has been done in the name of cost savings even though the earlier a bug is found in the application life cycle, the less it costs.
Also, if you are still conducting code reviews, your coworkers are probably not perfect: they can miss
some errors and can overlook violations of your coding standards.
Fortunately, the Enterprise PL/I compiler can help you with the latter:
You can change the defaults for the RULES compiler option to detect a wide variety of poor coding practices.
Declaring all variables is usually required in professional code, but by default, the compiler issues only
an informational message for undeclared variables (which means the compilation, if it had no other problems, would
end with a return code of 0).
However, under the RULES(NOLAXDCL)
option, the compiler will flag all undeclared variables with an error
message that would cause the compile to end with a return code of at least 8.
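As a small sketch (the procedure and variable names are made up for illustration), RULES(NOLAXDCL) would catch the typo here:

```pli
test: proc options(main);
  dcl count fixed bin(31) init(0);
  conut = count + 1;   /* typo: CONUT is undeclared; under RULES(NOLAXDCL) */
                       /* this draws an E-level message instead of being   */
                       /* quietly treated as an implicitly declared new    */
                       /* variable                                        */
end test;
```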
There are many other suboptions to the RULES option, and their default setting is to cause the compiler to behave as
the old OS PL/I and PL/I for MVS compilers did. But your code quality would be guaranteed to be better if you changed
some of these defaults to match your coding standards.
Please examine these suboptions and turn on those (such as NOLAXDCL) that will help you.
And if you have a coding standard that these suboptions do not check, please submit a requirement for it so
that we can add it in a future release.
Under the old OS PL/I and PL/I for MVS compilers, all extents (i.e. bounds and string lengths) for STATIC variables
and for BASED variables (not using REFER) had to be optionally signed integers. This made it hard to parameterize your
code (unless you used the macro preprocessor and then your listing would look much different than your source).
With Enterprise PL/I however, extents are required only to be what are known as "restricted expressions", i.e. expressions
that the compiler can reduce to a constant at compile time, and variables declared with the VALUE attribute may be freely
used in those expressions.
So, if you want to declare the variables old_name and new_name
to both have length 100, you could now use
dcl old_name char( 100 );
dcl new_name char( length(old_name) );
or, perhaps better,
dcl name_size fixed bin(31) value( 100 );
dcl old_name char( name_size );
dcl new_name char( name_size );
And note that if you want to change the lengths of these variables, since that length has been parameterized,
you just have to change one declare.
Also, if you then wanted to declare a variable big_name
to be 4 times as long (perhaps because it will hold the
utf-8 version), you could do this nicely as
dcl big_name char( 4*name_size );
The Enterprise compiler will reduce any arithmetic expression to a constant if the operands are constant, and it will
also reduce almost all built-in function references whose arguments are constant expressions.
So if you wanted to declare 200-element arrays of the above names, you could do this cleanly via
dcl name_size fixed bin(31) value( 100 );
dcl max_names fixed bin(31) value( 200 );
dcl old_name( max_names ) char( name_size );
dcl new_name( max_names ) char( name_size );
Also, if you want to initialize the first element of the last array to the value 'unknown' and all the
other elements to a null string, you could do this via
dcl new_name( max_names ) char( name_size ) init( 'unknown', (199) ( '' ) );
but it would be better to let the compiler do the math and to write
dcl new_name( max_names ) char( name_size ) init( 'unknown', (max_names-1) ( '' ) );
and perhaps best of all would be the even simpler and sleeker
dcl new_name( max_names ) char( name_size ) init( 'unknown', (*) ( '' ) );
Let the compiler count (as in the last declare above) and let it do arithmetic (as in the
penultimate example) while you write code that is both more elegant and easier to maintain.
Check out all
the information about the latest release of Enterprise PL/I for z/OS. You can
get a summary of the release as well as all the announcement details.
One of the new options introduced in the new 4.2 release of Enterprise PL/I
is the UNROLL option. However, to understand it, you first need to understand
what the compiler does with loops.
Usually, the compiler turns a DO loop into a sequence of instructions which is
followed by a test and then a conditional branch back to repeat those instructions
with some updated values (and the sequence of instructions may be preceded by a
conditional test to see if the loop should be run at all).
In some situations, the optimizer can make this generated code run faster by "unrolling"
the loop. This means that instead of generating the instructions described above, the
optimizer will eliminate the conditional branches (which are relatively expensive) and
instead duplicate the loop body with the updated values (or it may do some of both).
The new UNROLL option lets you control this. For example, given this code
dcl a(10) fixed bin(31) connected;
dcl jx fixed bin(31);
dcl sum fixed bin(31);
sum = 0;
do jx = 1 to 10;
sum += a(jx);
end;
Under UNROLL(NO), the compiler will not unroll the loop, and the code generated would look like
000046 5810 1000 L r1,_addrA(,r1,0)
00004A 41F0 0000 LA r15,0
00004E 41E0 0004 LA r14,4
000052 4100 000A LA r0,10
000056 A71A FFFC AHI r1,H'-4'
00005A @1L2 DS 0H
00005A 5EFE 1000 AL r15,_shadow1(r14,r1,0)
00005E 41E0 E004 LA r14,#AMNESIA(,r14,4)
000062 A706 FFFC BRCT r0,@1L2
But under UNROLL(AUTO)
, it would generate the longer, but faster
000046 5810 1000 L r1,_addrA(,r1,0)
00004A 58F0 1000 L r15,_shadow1(,r1,0)
00004E 5EF0 1004 AL r15,_shadow1(,r1,4)
000052 5EF0 1008 AL r15,_shadow1(,r1,8)
000056 5EF0 100C AL r15,_shadow1(,r1,12)
00005A 5EF0 1010 AL r15,_shadow1(,r1,16)
00005E 5EF0 1014 AL r15,_shadow1(,r1,20)
000062 5EF0 1018 AL r15,_shadow1(,r1,24)
000066 5EF0 101C AL r15,_shadow1(,r1,28)
00006A 5EF0 1020 AL r15,_shadow1(,r1,32)
00006E 5EF0 1024 AL r15,_shadow1(,r1,36)
This is a very simple example, and the code under the default setting of
UNROLL(AUTO) is probably better.
But note that even here the unrolled code is larger (if only by a bit).
If the code needed inside the loop were bigger, unrolling the loop could
significantly increase the object deck size.
There is no one correct setting for this option. You will have to decide
what is best for your code: do you want the optimizer to decide which loops
to unroll (that's what the default setting does and what all the previous
releases did) or do you want to turn off all loop unrolling?
Join our experts, Ray Jones, Vice President, IBM System z® Software, and Kevin Stoodley, IBM Fellow and CTO for Enterprise Modernization Tools, Compilers and Security, to learn how IBM’s latest compilers, middleware and tools can help you stay on the technology curve. In this complimentary webcast, Ray and Kevin will discuss best practices and approaches to plan and execute a successful compiler migration, alongside CICS®, IMS™ and DB2® upgrades. They will also go over IBM’s strategy for compilers and tools on System z to help you better plan your overall development and upgrade efforts.
Register online right now
Register now for this webcast by logging onto
Join us after the webcast for a live question-and-answer session. The webcast will also be available for replay after the event.
This blog entry is the first of two articles that will provide some guidance on how to work
with DB2 large objects (LOBs) in PL/I. They both refer to the 'pliclob.pli' file in the PL/I Cafe 'Files'
section for samples of actual code.
One way to use LOB data from a DB2 table is to declare a host variable
large enough to hold all of the LOB data. This requires your program to
allocate large amounts of storage and requires DB2 to move large amounts
of data. This can be inefficient or impractical.
Or you can use LOB locators and LOB file references to manipulate the data
while it still resides in the data base.
LOB Locators are used to avoid materialization of the LOB data and all
the underlying requirements associated with it.
The benefits of using LOB locators are:
- saving storage when manipulating LOBs with LOB locators
- manipulating data without retrieving it from the data base
- avoiding the use of large amounts of storage to hold the LOB
- avoiding the time and resource expenditures for moving large
pieces of data thereby improving performance.
LOB locators are especially useful:
- when you only need a small part of a LOB
- when you don't have enough memory for the entire LOB
- when performance is important
- in a client/server environment to avoid moving data over the
network from one system to another
Look at the pliclob sample program in the 'Files' section of the
PL/I Cafe for some ideas on how to manipulate CLOBs in a PL/I program.
For example, the pliclob sample program uses LOB locators to identify
and manipulate sections of the character large object (CLOB) resume
found in the DB2 V10 table dsn8a10.emp_photo_resume.
In the following code sample, extracted from the pliclob sample program,
the LOB locator 'hv_loc_resume' is set to the location of the resume of
the employee number 'hv_empno' in the emp_photo_resume table. Next the
start_resume host variable is set to the beginning of the 'Resume:'
section of the resume.
dcl hv_loc_resume sql type is clob_locator;
exec sql
select resume into :hv_loc_resume
from dsn8a10.emp_photo_resume
where empno = :hv_empno;
exec sql
set :start_resume = (posstr(:hv_loc_resume, 'Resume:'));
From here it is possible to start manipulating the resume data while
the resume is still resident in the data base. For greater detail,
refer to the pliclob sample program.
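One housekeeping note: when a program is finished with a locator, it can release it with the standard DB2 FREE LOCATOR statement (shown here with the host variable used above):

```pli
exec sql free locator :hv_loc_resume;
```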
In the previous blog entry we showed how Character Large Objects (CLOBs)
can be manipulated with LOB locators. In this blog we will discuss
the use of LOB file references.
LOB file reference variables are also very useful when working with
LOBs. They are used to import or export data between a LOB column and
an external file outside the DB2 system.
The benefits of using LOB file reference variables are that they:
- use less CPU time than moving LOB data with a host variable
because the movement of the data would not be overlapped with
any DB2 processing or network transfer time.
- use less application storage because the LOB data is moved
directly from DB2 to a file and is not materialized in the application's storage.
The pliclob sample program uses LOB file references to
create a new, trimmed down version of the resume in an external file.
In the pliclob sample program the host variable hv_clob_file is
declared as a LOB file reference.
The file name field of the LOB file reference is set to the fully
qualified file name and the file name length is set to its length.
For this example the 'overwrite' flag is set so any existing file
will be overwritten. These and other options are described fully
in the DB2 publication 'Application programming for SQL'.
Next the SQL VALUES statement is used to concatenate the resume name
and work history sections of the resume directly into the LOB file.
You can see this in the following code sample, extracted from the
pliclob sample program.
dcl hv_clob_file sql type is clob_file;
name_string = '/SYSTEM/tmp/pliclob2.txt';
hv_clob_file.sql_lob_file_name_len = length(name_string);
hv_clob_file.sql_lob_file_name = name_string;
hv_clob_file.sql_lob_file_options = ior(sql_file_overwrite);
values ( substr(:hv_loc_resume,:start_resume,
Now go and have some LOB fun yourself!
To see all of these techniques in context, please
refer to the pliclob sample program.