Average number of I/O operations per database record
HDAM randomizing can be considered efficient if the average number of I/O operations required to read at random all database segments of one database record is low. This average is one of the most important indicators of the quality of the randomizing.
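The metric can be sketched as follows. This is an illustrative Python sketch, not HSSR Engine code; it assumes that for each database record we know which blocks or CIs are read to retrieve all of its segments:

```python
# Illustrative sketch (not product code): the average number of I/Os per
# database record is the mean, over all records, of how many distinct
# blocks/CIs must be read to retrieve every segment of that record.

def avg_ios_per_record(records):
    """records: list of lists; each inner list holds the block (or CI)
    numbers touched while reading all segments of one database record."""
    if not records:
        return 0.0
    return sum(len(set(blocks)) for blocks in records) / len(records)

# Three records: two fit entirely in their home block, one spills into a
# second block (for example, into overflow), so the average is (1+1+2)/3.
print(avg_ios_per_record([[10], [11], [12, 40]]))  # ≈ 1.33
```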
This average number of I/Os is printed by the Database Tuning Statistics as shown in Figure 1, to the right of the phrase
AVG NBR I/O: - PER DB-R.
By looking at this average number in the Database Tuning Statistics, the database administrator can very rapidly see whether a database is efficiently randomized.
For an ideal database consisting of one single data set group, this average number would be 1.0.
In real life, this ideal value of 1.0 can seldom be achieved and the average number of database
I/Os per database record will be higher. Note that, due to the law of large numbers, it is easier to achieve a good randomizing value when the ratio of block size to average database record length is large.
As a general rule of thumb, the database can be considered fairly well randomized if the average number of I/Os per database record is below:
- 1.20 (for databases with an average database record length below one tenth of the block or CI size)
- 1.30 (for databases with larger average database record lengths)
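This rule of thumb can be expressed as a small check. The function names and the sample values (a CI size of 4096 bytes and an average record length of 680 bytes, roughly the one-sixth ratio of the example discussed below) are illustrative assumptions, not part of any product:

```python
def randomizing_threshold(avg_record_len, ci_size):
    """Rule-of-thumb target for the average number of I/Os per database
    record: 1.20 when the average record is shorter than one tenth of the
    block or CI size, 1.30 otherwise."""
    return 1.20 if avg_record_len < ci_size / 10 else 1.30

def is_fairly_well_randomized(avg_ios, avg_record_len, ci_size):
    return avg_ios <= randomizing_threshold(avg_record_len, ci_size)

# A database with short records is held to the stricter 1.20 target:
print(randomizing_threshold(300, 4096))               # 1.2
# 1.86 I/Os per record with ~1/6-of-CI records fails the 1.30 target:
print(is_fairly_well_randomized(1.86, 680, 4096))     # False
```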
For numbers above 1.20/1.30, the database can often be considered poorly randomized (unless the database record length is large). In this case, the database administrator can use the other numbers provided by the Database Tuning Statistics to answer the questions in the following checklist and, in most cases, to find the reason for the poor randomizing:
- Is the size of the root addressable area appropriate? (See Packing density of the root addressable area for details.)
- Is the number of RAPs appropriate? (See Number of RAPs per root segment for details.)
- Is the HDAM bytes limit appropriate for the database record lengths in this database? (See Bytes limit for details.)
- Is the block size or CI size appropriate for the database record lengths of this database? (See CI size and block size for details.)
- Is the database record length excessive? (See Databases with long database records for details.)
- Is the amount of Free Space specified during DBDGEN equal to zero? (See Free block frequency factor for details.)
- Is a database reorganization overdue in order to reduce database segment scattering in the overflow area? (See Periodical database reorganization for details.)
- For a Sequential Subset Randomizer: Is the relative amount of DASD space allocated to each subset of database records appropriate? (See Inefficient space suballocation for the Sequential Subset Randomizer for details.)
- Any reference to the Sequential Subset Randomizer in these topics is not intended to state or imply that this product should be used instead of the standard DFSHDC40 randomizing module.
- HSSR Engine does not know which database record occurrences are the most frequently accessed. For some databases, it is sometimes the longest database records requiring the largest number of I/Os that are the most often accessed database records. In this case, the average number of I/Os per database record reported by HSSR Engine is different from the average number of I/Os per accessed database record.
- HSSR Engine does not know which segment types are the most frequently accessed. With some databases, only one or two segment types are frequently accessed. In this case, the job step used to produce the Database Tuning Statistics can be run with a PSB that is sensitive only to those segment types. The Database Tuning Statistics of such a job will probably be more representative than those of a job that was sensitive to all segment types.
For detailed statistics about the probability of I/O for each segment type, see the Segment and Pointer Statistics report of IMS High Performance Pointer Checker.
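The difference between the average per stored record and the average per accessed record, mentioned in the notes above, can be illustrated with a toy example (all numbers invented): if the longest records are also the most frequently accessed, the access-weighted average exceeds the plain average reported by HSSR Engine.

```python
# Toy example (invented numbers): I/Os needed to read each database
# record, and how often each record is accessed by applications.
ios_per_record = [1, 1, 1, 4]   # one long record needs 4 I/Os...
accesses       = [1, 1, 1, 7]   # ...and it is accessed most often

# Plain average over stored records (what HSSR Engine reports):
avg_per_stored = sum(ios_per_record) / len(ios_per_record)

# Average weighted by how often each record is actually accessed:
avg_per_accessed = (sum(io * n for io, n in zip(ios_per_record, accesses))
                    / sum(accesses))

print(avg_per_stored)    # 1.75
print(avg_per_accessed)  # 3.1
```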
Discussion of the example
In the example of Figure 1, the average number of I/Os per database record is 1.86. This number is substantially higher than the target values of 1.20 and 1.30.
A quick glance at the average database record length shows that it is not large (one sixth of the CI size); therefore, large database record lengths are not the reason for the poor average number of I/Os per database record. It seems probable, then, that the high average is due to poor randomizing parameters and that it can be substantially reduced. The next topics show how the Database Tuning Statistics can be used to understand why this database is currently inefficiently randomized, and how the randomizing can be improved.
Note that the average numbers of I/Os per database record in the root addressable area (RAA) and in the overflow area are both far from their ideal values: 1.46 for the RAA and 0.40 for the overflow area (the ideal values being 1.00 and 0.00). A high value in the RAA is often an indication that the packing density of the root addressable area is too high (see Packing density of the root addressable area for details). A high value in the overflow area is often an indication that the bytes limit is too low (see Bytes limit for details).
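In this report the two components appear to be additive: every I/O for a record falls either in the root addressable area or in the overflow area, so the two per-record averages sum to the overall figure (the arithmetic below simply reproduces the numbers from Figure 1):

```python
# Figures taken from the example report (Figure 1):
raa_ios_per_record      = 1.46  # I/Os per record in the RAA (ideal: 1.00)
overflow_ios_per_record = 0.40  # I/Os per record in overflow (ideal: 0.00)

total = raa_ios_per_record + overflow_ios_per_record
print(f"{total:.2f}")  # 1.86, the overall average reported in Figure 1
```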