  • Latest Post - 2010-07-14T03:05:20Z by DonBagwell

Pinned topic DRAFT of WAS z/OS Techdoc for Batch Feature Pack

2010-07-13T19:44:15Z
Here is a preliminary DRAFT copy of a planned Techdoc that spells out the steps to install, configure, and use the Feature Pack for Modern Batch with WAS z/OS.

The Feature Pack applies to all supported WAS platforms; this planned Techdoc applies only to the WAS z/OS platform.

Please note:

o This is a draft copy, and the content may change
o The client code to exercise the RMI access to the scheduler is undergoing modifications and is not available at this time.
Updated on 2010-07-14T03:05:20Z by DonBagwell
  • DonBagwell

    Re: DRAFT of WAS z/OS Techdoc for Batch Feature Pack

    Here's a presentation I've worked up for the broader topic of "Java Batch".

    My experience is that we face two major objections:

    1. There's no reason to do Java when COBOL works well, or
    2. "I have my own custom JZOS batch ... it works well enough."

    The first objection seems to overlook the trends in the industry. Yes, COBOL is very good, and for existing COBOL batch that serves the business there's no need to rip-and-replace. Rock on with what works, brother ... that's my motto.

    The second objection seems to overlook the fact that there's a large exposure to going down the path of developing more and more custom middleware code. It's a subtle and very slippery path. A little here ... a little there ... and before you know it there's a very large custom batch infrastructure to support.

    At any rate, this is my humble attempt to position all the variables into an understandable framework. Let me know if this is just crazy talk or if I'm on the right track here. :-)

    I haven't yet had the chance to "speaker note" this presentation. That's on my to-do list.
  • DonBagwell

    Re: DRAFT of WAS z/OS Techdoc for Batch Feature Pack

    My personal experience regarding the readying of the environment for WAS z/OS:

    o Getting the batch.wct extension installed into the WCT is just a tad quirky. But the draft Techdoc outlines the steps and provides a recovery procedure in case things don't quite go the way you hoped.

    o The creation of the augmentation jobs is simple -- a snap.

    o The running of the augmentation jobs is simple -- also a snap. The nice thing here is you can augment already-federated nodes. Thank goodness they lifted the earlier restriction on augmenting federated nodes.

    o Enabling the scheduler and batch endpoints is just a little different in that you don't install any applications yourself; the configuration process installs them automatically. That bothers the control freak inside of me. :-) But it seems to work well. Again, the draft Techdoc walks you through step by step.

    o The creation of the database objects in DB2 z/OS is relatively easy, provided you have a comfort level with that sort of thing. I'm not that strong on STOGROUPs and such, so I fumbled a bit. But I got it to work.

    o The JDBC provider / data source configuration is business as usual. The key difference is that both need to be scoped to the cell level, because the DMGR interacts with the scheduler and endpoint servers during setup. It's a bit of a black box to me, but I took the work of Chris V., did as he suggested, and things worked well.
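
    One nice thing about that cell-scoped data source is that application code only ever sees a JNDI name. Here's a minimal sketch of the standard lookup pattern -- note the JNDI name "jdbc/lrsched" is just my assumption for illustration; use whatever name you gave your own data source:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Sketch of the standard JNDI lookup pattern for a WAS data source.
// The JNDI name below is hypothetical; substitute the name configured
// on your cell-scoped data source definition.
public class SchedulerDataSourceLookup {

    static final String JNDI_NAME = "jdbc/lrsched"; // assumed name

    // Inside the server, this returns the container-managed DataSource.
    public static DataSource lookup() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup(JNDI_NAME);
    }

    // Standalone demo: just shows which name would be looked up.
    public static void main(String[] args) {
        System.out.println("Would look up data source at: " + JNDI_NAME);
    }
}
```

    The actual lookup only resolves inside the server, of course, which is why the cell-level scoping matters: the DMGR, scheduler, and endpoint servers all need to see the same definition.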

    Validating the environment is spelled out in the draft Techdoc.

    Look ... setting up the environment and running the IVT programs is relatively simple. The real objective is getting your own batch programs running -- simple at first, then increasingly complex as you get your feet under you.

    Sadly, I'm pretty darn weak in Java ... but I'm working on it. I do aim to get better at coding batch Java to the Feature Pack Batch Data Stream framework. If I can get the equivalent of a "HelloWorld" to work, I'll be happy. For a bit. Then I'll want to do more :-)
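
    For what it's worth, here's roughly what I picture that "HelloWorld" looking like. This is only a sketch: the interface below is a stand-in I wrote to mimic the general create/process/destroy life cycle of a batch step, NOT the real Feature Pack API, so treat every name here as an assumption until you check the actual framework classes:

```java
// Stand-in for a batch step contract -- not the real Feature Pack
// API, just a sketch of the create/process/destroy life cycle.
interface BatchStep {
    void createJobStep();          // one-time setup
    boolean processJobStep();      // one unit of work; true = more to do
    int destroyJobStep();          // cleanup; returns the step's return code
}

public class HelloWorldStep implements BatchStep {
    private int remaining = 3;     // pretend we have three records to process

    public void createJobStep() {
        System.out.println("HelloWorldStep: setup");
    }

    public boolean processJobStep() {
        System.out.println("Hello, batch world! records left: " + remaining);
        remaining--;
        return remaining > 0;      // keep going until records are exhausted
    }

    public int destroyJobStep() {
        System.out.println("HelloWorldStep: cleanup");
        return 0;                  // 0 = success
    }

    // Standalone driver standing in for the batch container's loop.
    public static void main(String[] args) {
        HelloWorldStep step = new HelloWorldStep();
        step.createJobStep();
        while (step.processJobStep()) {
            // the real container would take checkpoints here
        }
        int rc = step.destroyJobStep();
        System.out.println("step return code: " + rc);
    }
}
```

    The shape is the point: the container, not your code, drives the loop and decides when to checkpoint, which is exactly the discipline a homegrown JZOS batch setup tends to reinvent piece by piece.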