Pinned topic: maximum record size > 32767 IGYGR1224-E

SamKnutson | 2012-11-19T19:23:34Z
Hi,

We have an old program with a sequential tape file whose records, driven by data-element growth, now need to be more than 32K each. The programmer asked if we had any workarounds for IGYGR1224-E.

My initial suggestion was that they could migrate the active data into a database like DB2 or IMS, but this is an old system on which they would prefer to avoid significant development. The files are very large, with multiple copies.

The file is an ordinary sequential file with old/new master-file-style processing, i.e.:

FD NEW-DATABASE-FILE
LABEL RECORDS ARE STANDARD
BLOCK CONTAINS 0 RECORDS
RECORDING MODE IS V
DATA RECORD IS NEW1-RDDB-REC.

We are at the most current releases: Enterprise COBOL 4.2 on z/OS 1.13.

I am not aware of any relief that would help them out here. We would look at LBI if it would help, but it appears that it would not in this case, since LBI raises the limit on block size rather than on record length. They need to process this in a simple COBOL program. Any suggestions that have worked well for other customers under similar constraints?

Best Regards,

Sam Knutson, GEICO
System z Team Leader
mailto:sknutson@geico.com
(office) 301.986.3574

"Think big, act bold, start simple, grow fast..."

Re: maximum record size > 32767 IGYGR1224-E

BillWoodger | 2012-11-19T23:18:16Z, in response to SamKnutson

32767 is a QSAM limit.

Although not with those record sizes or the volumes you are indicating, I've done something similar by having "split" records on the physical file and "stitching" them together in a sub-routine which actually does the reading. The main program "reads" a logical record via the sub-routine; the sub-routine does one or more reads of actual records and puts together the bigger record. On the "way out" the stitching is "unpicked", writing one or more actual records per logical record.
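
A minimal sketch of the read side, assuming fixed 16,000-byte data segments, a one-byte last-segment flag, and a binary length field. All of the names here (RDLOGIC, the SEGFILE ddname, the fields) are invented for illustration; a real layout would carry whatever continuation scheme suits the file:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. RDLOGIC.
      * Hypothetical "stitching" sub-routine: each CALL returns one
      * logical record, assembled from one or more physical segments.
      * The program keeps its last-used state between calls, so the
      * file stays open across the whole run.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT SEG-FILE ASSIGN TO SEGFILE
               FILE STATUS IS WS-SEG-STATUS.
       DATA DIVISION.
       FILE SECTION.
       FD  SEG-FILE
           LABEL RECORDS ARE STANDARD
           BLOCK CONTAINS 0 RECORDS
           RECORDING MODE IS V.
       01  SEG-REC.
           05  SEG-LAST-FLAG            PIC X.
               88  SEG-IS-LAST          VALUE 'L'.
           05  SEG-DATA-LEN             PIC S9(8) COMP.
           05  SEG-DATA                 PIC X(16000).
       WORKING-STORAGE SECTION.
       01  WS-SEG-STATUS                PIC XX VALUE SPACES.
       01  WS-FIRST-TIME                PIC X  VALUE 'Y'.
       01  WS-DONE                      PIC X.
       01  WS-OFFSET                    PIC S9(8) COMP.
       LINKAGE SECTION.
       01  LS-LOGICAL-REC.
           05  LS-LOGICAL-LEN           PIC S9(8) COMP.
      *    Only the FD record is limited to 32767 bytes; a
      *    WORKING-STORAGE or LINKAGE item can be much larger.
           05  LS-LOGICAL-DATA          PIC X(64000).
       01  LS-EOF-FLAG                  PIC X.
       PROCEDURE DIVISION USING LS-LOGICAL-REC LS-EOF-FLAG.
           IF WS-FIRST-TIME = 'Y'
               OPEN INPUT SEG-FILE
               MOVE 'N' TO WS-FIRST-TIME
           END-IF
           MOVE 0 TO LS-LOGICAL-LEN
           MOVE 1 TO WS-OFFSET
           MOVE 'N' TO WS-DONE
           PERFORM UNTIL WS-DONE = 'Y'
               READ SEG-FILE
                   AT END
      *                Caller must stop calling once this is set
                       MOVE 'Y' TO LS-EOF-FLAG
                       MOVE 'Y' TO WS-DONE
                       CLOSE SEG-FILE
                   NOT AT END
      *                Append this segment to the logical record
                       MOVE SEG-DATA (1:SEG-DATA-LEN)
                         TO LS-LOGICAL-DATA (WS-OFFSET:SEG-DATA-LEN)
                       ADD SEG-DATA-LEN TO LS-LOGICAL-LEN
                       ADD SEG-DATA-LEN TO WS-OFFSET
                       IF SEG-IS-LAST
                           MOVE 'Y' TO WS-DONE
                       END-IF
               END-READ
           END-PERFORM
           GOBACK.

In place of its READ, the main program then does something like CALL 'RDLOGIC' USING WS-LOGICAL-REC WS-EOF-FLAG (with the flag initialised to 'N') and carries on as before; a matching sub-routine on the output side chops each logical record back into segments and writes them.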

You might look at VBS (spanned records), which is the system doing something similar for you, though there may be more flexibility in doing it yourself.

I'd try to make the actual records well below 32K. If you have to split, you may as well get far away from the limit and allow easier ad-hoc processing of the file.

Watch out to minimise the tossing-around of the data if you go for this.

If it runs like a dog with no legs, there are probably ways around that.