Using DEDBs
Using DEDBs can improve performance in several areas: reduced path length, parallel processing capability, less I/O, and reduced logging overhead.
- Reduced path length
- DEDBs use Media Manager for more efficient control interval (CI) processing, which can reduce path length.
- DEDBs have their own resource manager, which means:
- Less interaction with the lock manager you are using (PI or the IRLM), provided you are not using block-level sharing.
- Simplified buffer handling (and reduced path length) because DEDBs have their own buffer pool.
- Parallel processing
DEDB writes are not done during the life of the transaction but are kept in buffers. Actual update operations are delayed until a synchronization point and are performed asynchronously by output threads in the control region. Each output thread runs as a service request block (SRB), a separately dispatchable MVS unit of work. You can specify up to 255 output threads. This means that:
- The CICS® task can be freed earlier.
- Parallel processing is increased and throughput on multiprocessors is improved.
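The deferred-write behavior described above can be sketched in Python. This is an illustrative analogy only, not IMS code: the thread pool stands in for IMS output threads, and all names (`DeferredWriter`, `run_transaction`) are invented for the sketch.

```python
import queue
import threading

class DeferredWriter:
    """Sketch of deferred writes: updates accumulate in a buffer and are
    written asynchronously after the sync point, so the requesting task
    does not wait for I/O."""

    def __init__(self, num_output_threads=2):
        self.work = queue.Queue()
        self.written = []           # stands in for the database on disk
        self.lock = threading.Lock()
        self.threads = [
            threading.Thread(target=self._output_thread, daemon=True)
            for _ in range(num_output_threads)   # IMS allows up to 255
        ]
        for t in self.threads:
            t.start()

    def _output_thread(self):
        # Output threads drain buffers independently of the transaction.
        while True:
            buffer = self.work.get()
            if buffer is None:       # shutdown sentinel
                break
            with self.lock:
                self.written.extend(buffer)   # the actual "write"
            self.work.task_done()

    def run_transaction(self, updates):
        buffer = list(updates)       # changes stay in buffers, no I/O yet
        self.work.put(buffer)        # sync point: hand off to output threads
        # the transaction returns here without waiting for the writes

    def shutdown(self):
        self.work.join()             # wait for pending writes to complete
        for _ in self.threads:
            self.work.put(None)
        for t in self.threads:
            t.join()

writer = DeferredWriter()
writer.run_transaction(["seg-A", "seg-B"])
writer.run_transaction(["seg-C"])
writer.shutdown()
print(sorted(writer.written))
```

Because the hand-off at the sync point is cheap, the transaction's elapsed time excludes the write I/O, and multiple output threads let writes from many transactions proceed in parallel on a multiprocessor.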
- Less I/O
The cost of I/O per SDEP segment inserted can be very low because SDEP segments are gathered in one buffer and written out only when the buffer is full. Many transactions can therefore share the cost of SDEP CI writes to a DEDB. SDEPs should have larger CIs to reduce I/Os.
- Reduced logging overhead
DEDB log buffers are written to OLDS only when they are full. This means less I/O than would be needed with full function databases.
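The SDEP and log-buffer behaviors amortize I/O the same way: records accumulate in a buffer, and a single write covers all of them. A minimal sketch of this write-when-full pattern, where the buffer size and all names are illustrative rather than IMS parameters:

```python
class FullBufferWriter:
    """Sketch of write-when-full buffering: many inserts share one I/O."""

    def __init__(self, ci_size=4):
        self.ci_size = ci_size       # records per CI (illustrative value)
        self.buffer = []
        self.io_count = 0            # physical writes performed
        self.disk = []               # stands in for the data set

    def insert(self, record):
        self.buffer.append(record)           # most inserts cost no I/O
        if len(self.buffer) == self.ci_size:
            self._write_ci()                 # one write for ci_size records

    def _write_ci(self):
        self.disk.extend(self.buffer)
        self.buffer = []
        self.io_count += 1

w = FullBufferWriter(ci_size=4)
for i in range(8):
    w.insert(f"sdep-{i}")
print(w.io_count)
```

Eight inserts here trigger only two writes; doubling the buffer (CI) size would halve the write count again, which is why larger CIs for SDEPs reduce I/O.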