New Batch Residency
Martin Packer
In October Frank Kyne and I expect to run a residency in Poughkeepsie. You can find the announcement here.
The residency builds on the ideas presented here and three subsequent posts.
I revisited a specific part of it in Cloning Fan-In.
So what are we going to do?
For a start we're going to assemble a team of 4 skilled mainframe folks from wherever we can. One of them will be me, which leaves 3. You could be one of those - but only if you throw your hat in the ring.
We're looking for three distinct roles: a scheduling (TWS) specialist, a COBOL programmer with DB2 experience, and a PL/I programmer with VSAM experience.
Actually there's some flexibility in these last two roles: COBOL / VSAM and PL/I / DB2 would work just fine.
But I still haven't told you what we'll actually do...
We aim to teach people how to successfully clone individual batch jobs - through examples and guidance.
How We'll Do It
The referenced blog posts describe some theory. This residency will write a Redbook that'll describe the practice.
We'll create two test cases that we'll assert want cloning. They'll process a large number of records / rows - in a loop. This is a very common application pattern: If you can think of another one we'll entertain it.
One program will be written in COBOL and the other in PL/I. Hence the programming skill requirements.
One will access DB2 data primarily, the other mainly VSAM. Which explains those two skill requirements.
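Stripped to its essentials, the application pattern in question looks something like the sketch below. It's Python rather than the COBOL or PL/I the residency will actually use, and the names and toy "business logic" are purely illustrative: read each record in a loop, maybe update it, and accumulate totals for an end-of-job report.

```python
# A miniature of the record-loop batch pattern the test cases will follow.
# The in-memory "dataset" and the update rule are stand-ins, not real code
# from the residency.

def run_batch(records):
    """Read each record, conditionally update it, and accumulate totals."""
    totals = {"read": 0, "updated": 0}
    output = []
    for record in records:              # the large sequential loop
        totals["read"] += 1
        if record % 2 == 0:             # stand-in for real business logic
            record += 1
            totals["updated"] += 1
        output.append(record)
    return output, totals               # totals feed the end-of-job report

out, totals = run_batch(range(10))
print(totals)                           # prints {'read': 10, 'updated': 5}
```

It's precisely the summary `totals` at the end that makes cloning interesting, as discussed below.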
So we'll spend the first few days creating these baselines - and measuring them.
Then we'll investigate cloning - 2-up, then 4-up, then 8-up, etc.
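The arithmetic behind an n-up split can be sketched simply: divide the input into contiguous key ranges, one per clone, spreading any remainder. This Python fragment (illustrative only - the real split would be expressed in JCL and the programs' input selection) shows the 2-up, 4-up, 8-up progression:

```python
# Illustrative sketch: dividing a sequential input of `total` records
# among `clones` parallel jobs. Function and variable names are
# hypothetical, not residency deliverables.

def clone_ranges(total, clones):
    """Return (first, last) record numbers, 1-based inclusive, per clone."""
    base, extra = divmod(total, clones)
    ranges = []
    start = 1
    for i in range(clones):
        count = base + (1 if i < extra else 0)  # spread the remainder
        ranges.append((start, start + count - 1))
        start += count
    return ranges

# 2-up, 4-up and 8-up splits of a 1,000,000-record input:
for n in (2, 4, 8):
    print(n, clone_ranges(1_000_000, n))
```

Even splits like this assume the work per record is uniform; real data often isn't, which is one reason deciding the split dynamically (see below) matters.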
Why This Is Non-Trivial
If these cases were read-only this might be trivial. If these cases didn't write a summary report at the end the same might be true.
But we won't make it so easy on ourselves: our cases will update data as they go and write a summary report at the end.
Both reflect real life problems people will have.
And if the residents can think of some more pain to inflict on ourselves we will.
As I mentioned in Cloning Fan-In, many programs produce a report. This is easy before cloning. With cloning it's much harder. So we need to exercise that.
But I posited a modified architecture: have each clone create a data file and merge them in a separate reporting job. JSON could be involved, and so could XML - as those files should be in modern formats. I say that because one benefit of cloning a job could be making the reporting data available to other consumers.
If we have time someone could explore this.
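To make the idea concrete, here's a minimal sketch of that modified architecture, assuming each clone writes its partial totals as a small JSON file and a later reporting job sums them. The field names and the merge-by-summing rule are assumptions for illustration, not the residency's design:

```python
# Hypothetical sketch of the "merge in a separate reporting job" idea:
# each clone emits a JSON summary; a later job combines them.
import json

def merge_summaries(summaries):
    """Combine per-clone summaries by summing each counter."""
    report = {}
    for s in summaries:
        for key, value in s.items():
            report[key] = report.get(key, 0) + value
    return report

# e.g. two clones' partial totals, as they might appear in JSON files:
clone1 = json.loads('{"records_read": 500000, "records_updated": 120}')
clone2 = json.loads('{"records_read": 500000, "records_updated": 95}')
print(merge_summaries([clone1, clone2]))
# prints {'records_read': 1000000, 'records_updated': 215}
```

Summing works for simple counters; minima, maxima or averages would each need their own merge rule, which is part of what makes the reporting side worth exercising.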
Why We Need A Scheduling Person
First, real life would require you to integrate a cloned job into Production: Scheduling one job, complete with recovery, is one thing. Scheduling cloned jobs is another.
Second, it's not enough to succeed once in cloning a job: Installations will want to automate splitting the job again and maybe even decide dynamically how many clones to run. (And maybe where they'll run.)
The TWS person will not only do scheduling but also figure out how best to structure the JCL. Though it's not the main thrust of this residency, z/OS 2.1 will have JCL enhancements that I expect to be useful here. We'll have 2.1 on our LPAR so you can play with this.
If you've never played with BatchPipes/MVS I expect you'll get to try it out, too.
While we overtly state this is not a formal benchmark, we'll take lots of measurements and tune accordingly.
This is where I expect to play the main role.
The idea of this is to deliver practical guidance through real life case studies. So there'll be a book and maybe a presentation.
We'll document what we did, what issues arose, how we resolved them, and what we learnt. And this will draw on all our perspectives.
As the application programs aren't the main deliverable they'll probably go in appendices. Tweaks we have to make to the code, JCL and schedule will be highlighted. Reporting requirements will also be described.
I think this will be a lot of fun. I also think the contact with Development will be fruitful.
So I invite you all to consider applying. Nominations close 5 July.