z/OS DFSMStvs Planning and Operating Guide (SC23-6877-00)


Applying advanced application development techniques


If you use DFSMStvs for more than just running batch jobs that are in a CICS® batch window, this topic can help you apply advanced techniques for coding your application.

You need to convert batch jobs so that they use DFSMStvs to update recoverable data sets. This conversion involves the following tasks:
  • Modifying jobs to use sync points and to take the appropriate action (commit or backout) at each of them
  • Modifying jobs to use RRS services to invoke commit and backout processing (see the sketch after this list)
  • Modifying jobs to specify that DFSMStvs is to be used
  • Examining jobs to ensure that the use of multiple RPLs within a single application does not cause lock contention within a unit of recovery
  • Coding jobs to handle loss of positioning, which can occur at commit or backout for unpaired requests (a GET UPD request that is not followed by a PUT UPD request, or an IDALKADD request that is not followed by a PUT NUP request)
  • Examining jobs to ensure that any new reason codes are handled appropriately
  • If any applications act as work dispatchers, examining jobs to ensure that all work that is intended to be part of a single unit of recovery is handled under the same context
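The following sketch, in C, shows the general shape of such a converted job. It is a minimal illustration, not code from this guide: the record-update routine update_next_record() is hypothetical, and the C prototypes shown for the RRS commit and backout callable services (SRRCMIT and SRRBACK) are assumptions about the language binding; on z/OS you would invoke these services through whatever high-level-language or assembler interface your environment provides.

```c
/* Minimal sketch of a batch job converted to use sync points.
 * Assumptions: update_next_record() is a hypothetical routine that
 * issues one paired GET UPD / PUT UPD against a recoverable data set
 * opened for DFSMStvs access; the SRRCMIT/SRRBACK prototypes below
 * stand in for whatever binding your compiler and link-edit provide. */
#include <stdio.h>
#include <stdlib.h>

/* Assumed C bindings for the RRS callable services. */
extern void srrcmit(int *return_code);   /* SRRCMIT: commit the UR   */
extern void srrback(int *return_code);   /* SRRBACK: back out the UR */

/* Hypothetical application routine: returns 0 on success, nonzero on
 * any error; it must leave no unpaired GET UPD or IDALKADD behind. */
extern int update_next_record(long record_number);

#define COMMIT_INTERVAL 1000  /* records per unit of recovery */

int main(void)
{
    long first = 1, last = 100000;  /* key range this job owns */
    int rc;

    for (long rec = first; rec <= last; rec++) {
        if (update_next_record(rec) != 0) {
            /* Something failed mid-UR: back out every uncommitted
             * update, then decide whether to retry or quit.  Backout
             * releases the locks and loses read position. */
            srrback(&rc);
            fprintf(stderr, "backed out at record %ld, rc=%d\n", rec, rc);
            return EXIT_FAILURE;
        }
        if (rec % COMMIT_INTERVAL == 0) {
            /* Take a sync point.  Commit also releases the locks and
             * invalidates positioning, so reestablish position (for
             * example, by key) before the next GET UPD. */
            srrcmit(&rc);
            if (rc != 0) {
                fprintf(stderr, "commit failed, rc=%d\n", rc);
                return EXIT_FAILURE;
            }
        }
    }
    srrcmit(&rc);  /* commit the final, partial unit of recovery */
    return rc == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}
```

The commit interval is a tuning choice: committing more often shortens lock hold times and reduces contention with other jobs, at the cost of more frequent repositioning.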

Another reason to use more advanced techniques is a batch process that was originally written as a single-threaded job. Using DFSMStvs, you can split such a process into several jobs that run in parallel.

For example, suppose that you have a recoverable data set that contains 100 000 records and a batch job that processes and updates all of the records. Assuming that the updates are independent of each other, you might split the batch job into four jobs: the first job would process records 1 - 25 000, the second records 25 001 - 50 000, the third records 50 001 - 75 000, and the fourth records 75 001 - 100 000. Alternatively, your batch job could run as a mother task that attaches four subtasks. The mother task would give work to each of the subtasks, and as each subtask completed its work, the mother task would give it more. With this approach, each subtask would work under an independent context and unit of recovery. A sketch of this dispatching pattern follows.
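The sketch below illustrates only the dispatching pattern, under stated assumptions: POSIX threads stand in for z/OS subtasks, process_range() is a hypothetical routine that updates a range of records (committing or backing out as in the earlier sketch), and the RRS context management that would give each subtask its own context on z/OS is left out and noted only in comments.

```c
/* Sketch of the mother-task pattern: a dispatcher hands out key ranges
 * and each worker processes its range as independent units of work.
 * POSIX threads stand in for z/OS subtasks here; on z/OS, each subtask
 * would run under its own RRS context so that its commits and backouts
 * are independent of the other subtasks'. */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS   4
#define NRECORDS   100000
#define CHUNK      25000   /* records handed out per work request */

/* Hypothetical: process records first..last as one or more URs,
 * issuing its own commits and backouts; returns 0 on success. */
extern int process_range(long first, long last);

static long next_first = 1;               /* next unassigned record */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Worker: repeatedly claims another chunk until no work remains. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        long first = next_first;
        next_first += CHUNK;
        pthread_mutex_unlock(&lock);

        if (first > NRECORDS)
            return NULL;                  /* no work left */

        long last = first + CHUNK - 1;
        if (last > NRECORDS)
            last = NRECORDS;
        if (process_range(first, last) != 0)
            return NULL;  /* the worker already backed out its UR */
    }
}

int main(void)
{
    pthread_t tid[NWORKERS];
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```

Because each worker claims the next unprocessed chunk rather than owning a fixed quarter of the data set, a slow range does not leave the other subtasks idle; this matches the mother-task approach described above, where the mother task gives a subtask more work as it finishes.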
