Module Size with POSITION and SUBSTRING

2006-02-07T11:27:04Z
DataStage 390. CSV input from a spreadsheet has numeric fields that contain commas, e.g. '1,111,111,111.22', sometimes with a blank on one end or the other. I use POSITION in a stage variable to find the first embedded comma and string together everything before and after it. A second stage variable finds the next comma, if there is one, giving '1111111,111.22'. A third stage variable finishes the job and is used on the output link. So far, no numbers over 10 billion.

StageVar 'trimALOS':
    TRIM(BOTH '"' FROM Inp01.ALOS)

StageVar 2:
    if POSITION(',' IN trimALOS) > 0 then
        SUBSTRING(trimALOS
                  FROM 1
                  FOR POSITION(',' IN trimALOS) - 1)
        || SUBSTRING(trimALOS
                     FROM POSITION(',' IN trimALOS) + 1
                     FOR 11 - POSITION(',' IN trimALOS))
    else trimALOS end
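
For comparison, here is the same splice-around-the-comma logic as a small C sketch (C only to show the idea; the sample value comes from above, everything else is illustrative). It removes one comma per call, which is why the stage-variable expression has to be repeated for each possible comma, since the expression language has no loop.

#include <stdio.h>
#include <string.h>

/* Remove the first comma from s, in place, mirroring the
   POSITION/SUBSTRING splice in the stage variable.
   Returns 1 if a comma was removed, 0 if none was found. */
static int remove_first_comma(char *s)
{
    char *p = strchr(s, ',');              /* POSITION(',' IN s)     */
    if (p == NULL)
        return 0;                          /* else branch: unchanged */
    memmove(p, p + 1, strlen(p + 1) + 1);  /* splice before + after  */
    return 1;
}

int main(void)
{
    char alos[32] = "1,111,111,111.22";    /* sample field value */

    /* Chained like the stage variables: one comma per pass. */
    remove_first_comma(alos);   /* 1111,111,111.22 */
    remove_first_comma(alos);   /* 1111111,111.22  */
    remove_first_comma(alos);   /* 1111111111.22   */

    printf("%s\n", alos);
    return 0;
}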

The size of the module jumped from 777 KB to 18,000 KB when commas were handled that way in 12 fields. The library management system has problems handling something that big, but we've learned to fool it.

Is there a way to get the size back down?

I could write and call an external COBOL program with an efficient subroutine to remove them, but I have 126 DS jobs with different input formats.
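
A rough sketch of what the core of that subroutine might look like, written in C here just to show the logic (the fixed-length, blank-padded field assumption is mine, not anything from the jobs above):

#include <string.h>

/* Sketch of an external "strip commas" routine: given a fixed-length
   field (blank-padded, no null terminator, as it would arrive in a
   mainframe record), copy every byte except commas and pad the
   result back out with blanks so the field length stays the same. */
void strip_commas(char *field, int len)
{
    int in, out = 0;

    for (in = 0; in < len; in++)
        if (field[in] != ',')
            field[out++] = field[in];

    while (out < len)
        field[out++] = ' ';
}

Each job would still have to get its numeric fields to that routine somehow, which is where the 126 different input formats come back to bite.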

I could use overlays in the LINKAGE SECTION for the 126 formats, but I would still need 126 sets of moves. Almost unmaintainable.

I could hardcode field displacements and lengths for the records I pass and let COBOL crawl through each character to remove the extra commas, but I'm afraid of trying to test that, or of finding anyone who could maintain it.
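
If it came to that, one variant of the displacement-and-length idea is to drive it from a small table per input format rather than hand-coded logic. A rough C sketch, with made-up offsets and lengths, reusing the comma-stripping routine from the earlier sketch:

/* Table-driven version of the displacement/length idea: one small
   table per input format, one routine that loops over it, instead
   of 126 separate sets of moves.  Offsets and lengths are invented
   for illustration only. */

struct numfield { int offset; int length; };

/* Same comma-stripping routine as sketched above. */
static void strip_commas(char *field, int len)
{
    int in, out = 0;
    for (in = 0; in < len; in++)
        if (field[in] != ',')
            field[out++] = field[in];
    while (out < len)
        field[out++] = ' ';
}

/* One table per input format; this one is a made-up example. */
static const struct numfield format_a[] = {
    { 10, 18 },     /* e.g. the ALOS field   */
    { 45, 12 },     /* another numeric field */
};

void clean_record(char *record, const struct numfield *tbl, int nfields)
{
    int i;
    for (i = 0; i < nfields; i++)
        strip_commas(record + tbl[i].offset, tbl[i].length);
}

That at least keeps the per-format knowledge in data rather than in procedural code, though it still means building and maintaining 126 tables.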

Is there another way besides getting them to change the input?