Many customers load DLA books generated by a variety of sources including the z/OS DLA, ITNM, and custom sources like contacts from Active Directory.
Most often, customers simply use the built-in loadidml.sh or loadidml.bat script, pointed either at a single file or at a directory where the books reside. If pointed at a directory, the loadidml.sh script simply loads all the books serially. Usually, the bottleneck is not the database server, which means it should be possible to run several bulkloads concurrently. There are two obvious options for improving bulkload throughput:
- If there are multiple storage servers in the TADDM environment (streaming mode), bulkloads can be performed on all (or a subset) of the storage servers simultaneously.
- If there is sufficient memory and CPU available on a storage server, several bulkloads can be performed on a single server simultaneously.
I have written some wrapper scripts to perform the parallel bulk loads. The scripts work by checking for new books in a specific directory known as the book repository (/home/taddmusr/DLA_BOOKS in the attached scripts).
You can either configure different DLA sources to deposit their books on different storage servers, or you can mount the book repo on all your storage servers. Mounting the book repo on all storage servers results in the largest reduction in the time to load the books.
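If you choose the shared-repo approach, a common way to do it is an NFS mount on each storage server. The hostname and export path below are placeholders for your environment, not values from the scripts:

```shell
# Placeholder hostname/export; substitute your NFS server and export path.
# Add to /etc/fstab on each storage server:
#   nfs-server:/export/DLA_BOOKS  /home/taddmusr/DLA_BOOKS  nfs  defaults  0 0
mount /home/taddmusr/DLA_BOOKS   # then mount it (as root)
```

Any shared filesystem works; the only requirement is that all storage servers see the same directory, since the atomic-rename claiming described below relies on the threads competing over the same files.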
This solution also records metadata about each bulk load, allowing you to produce statistics on the loads.
Installing the tooling
Assuming the TADDM service account is called "taddmusr", extract the processBooksTooling.tgz file to the $HOME/scripts directory.
Edit the processBooksStartup.sh script to set the following variables:
PARALLELISM=5 - this defines the number of concurrent bulkloads to run on this storage server. Start with this parameter set at 2 and verify that everything works as expected. Continue to increase this on all storage servers until a point of diminishing returns is reached.
USER=taddmusr - this parameter is only used if the tooling is run as root. The script will su to the named user before running the thread.
STARTSCRIPTDIR=/home/taddmusr/scripts - the directory where the two scripts were extracted
WASSEEDDIR=$COLLATION_HOME/var/dla/zos/was - the IZDJSEED utility run on the z/OS LPAR creates files that are not DLA books but rather TADDM "seed files", which TADDM uses to determine how to communicate via JMX with the WebSphere instances on the mainframe LPARs. If you don't have WebSphere servers running on your mainframe LPARs, there is no need to set this variable.
LOADOPTIONS="-u administrator -p collation -o" - these are the options passed to loadidml.sh. It is good practice (although currently unnecessary) to pass the username and password. The -o option tells loadidml to process a file even if it has been processed before (that is, it already has an entry in $COLLATION_HOME/bulk/processedfiles.list). Another common option is "-g", which tells the bulk loader to persist "groups" of objects all at once. This results in substantially faster loads, but it has the drawback that a single malformed object in the book will cause the whole group of objects to fail to persist. It is wise to make sure the loads work for a period of time before enabling this option.
STATUSSCRIPT="/etc/init.d/collation" - the status of the TADDM server is checked prior to looking for new DLA books. If the server is not in "Running" state, then the threads immediately exit.
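Put together, the configuration section of processBooksStartup.sh might look like the sketch below. This consolidates the variables described above for reference; it is not the shipped script, so adjust the values for your environment:

```shell
# Configuration section of processBooksStartup.sh (illustrative sketch).
PARALLELISM=5                                   # concurrent bulkloads on this storage server
USER=taddmusr                                   # su target when the tooling is started as root
STARTSCRIPTDIR=/home/taddmusr/scripts           # where the two scripts were extracted
WASSEEDDIR=$COLLATION_HOME/var/dla/zos/was      # z/OS WebSphere seed files (optional)
LOADOPTIONS="-u administrator -p collation -o"  # passed straight through to loadidml.sh
STATUSSCRIPT="/etc/init.d/collation"            # used to verify TADDM is in "Running" state
```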
Optionally, add a cron entry similar to the following:
33 * * * * /home/taddmusr/scripts/processBooksStartup.sh start
If you don't add a cron entry, then you will have to remember to start the processBooksStartup.sh script each time the TADDM server is restarted.
How it works
When you run the processBooksStartup.sh script, it launches "n" copies of the processBooks.sh script (see the PARALLELISM variable above). Each of these threads sleeps for 60 seconds before looking for files in the book repository with names in the format <something>.xml. If no file is found, the thread resumes the sleep/check cycle. If a file is found, it is renamed to <something>.xml.<threadNumber>.loading. Since renaming a file is an atomic operation, even if two threads pick up the same file at the same time, only one of them will succeed in renaming it. The other will fail and simply go back to looking for new books.
If the DLA book is successfully renamed, the thread then runs the loadidml.sh script with the options specified above. The output from the loadidml.sh script is mostly useless, so it is redirected to the file /tmp/load.<pid> where pid is the process ID of the thread.
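One pass of that claim-and-load cycle can be sketched as follows. This is a simplification: the temp directory and stub loader comment stand in for the real book repository and the loadidml.sh invocation, and the real processBooks.sh loops forever with the 60-second sleep:

```shell
# One pass of the check/claim cycle (sketch only).
BOOKREPO=$(mktemp -d)          # stand-in for /home/taddmusr/DLA_BOOKS
THREAD=1
touch "$BOOKREPO/sample.xml"   # pretend a DLA book has arrived

for book in "$BOOKREPO"/*.xml; do
    [ -e "$book" ] || continue          # glob matched nothing: back to sleep
    claimed="$book.$THREAD.loading"
    # mv is atomic on the same filesystem: if two threads race for the
    # same book, only one rename succeeds; the loser goes back to polling.
    if mv "$book" "$claimed" 2>/dev/null; then
        echo "thread $THREAD claimed $claimed"
        # The real script now runs loadidml.sh with $LOADOPTIONS against
        # the claimed file, redirecting its output to /tmp/load.<pid>.
    fi
done
```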
Load Metadata
The following information is captured for each DLA book load:
- DLA Book name
- DLA Book size
- Start time of the book load
- End time of the book load
- Return code of the book load (0 is success)
This metadata is written to a file in $COLLATION_HOME/bulk/loadidml.<threadNumber>.out
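Such a record can be as simple as one delimited line per load. The field layout below is an assumption for illustration (check your loadidml.<threadNumber>.out files for the actual format), with `true` standing in for the real loadidml.sh call:

```shell
# Hypothetical metadata record: name|size|start|end|rc (0 in rc means success).
OUT=$(mktemp)                    # stand-in for $COLLATION_HOME/bulk/loadidml.1.out
book=$(mktemp)
echo '<idml/>' > "$book"         # pretend DLA book
size=$(wc -c < "$book" | tr -d ' ')
start=$(date '+%Y-%m-%d %H:%M:%S')
true                             # stand-in for the actual loadidml.sh invocation
rc=$?
end=$(date '+%Y-%m-%d %H:%M:%S')
printf '%s|%s|%s|%s|%s\n' "$book" "$size" "$start" "$end" "$rc" >> "$OUT"
```

A pipe-delimited line per load makes it trivial to compute load statistics later with awk or a spreadsheet.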
Stopping the book processing
If you wish to stop the processing of DLA books (perhaps before stopping the TADDM server), you can execute the following command:
$ rm $BOOKREPO/keepRunning
Each thread will check for the existence of that marker file before looking for new books to process. If it doesn't exist, the thread will exit.
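That guard can be sketched like this, with a temp directory standing in for the book repository:

```shell
BOOKREPO=$(mktemp -d)            # stand-in for /home/taddmusr/DLA_BOOKS
touch "$BOOKREPO/keepRunning"    # marker created when processing starts
keep_running() { [ -f "$BOOKREPO/keepRunning" ]; }

keep_running && echo "marker present: keep looking for books"
rm "$BOOKREPO/keepRunning"       # operator removes the marker to stop processing
keep_running || echo "marker gone: thread exits"
```

Because every thread checks the same marker before each cycle, removing one file drains all the threads cleanly within one polling interval.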
Since the processBooks.sh threads run all the time, it is possible that a thread could pick up a DLA book that is incomplete (partially transmitted). If that happens, the bulkload of that file will fail. Check the logs to determine whether a bulkload failed, and manually re-load the book if that is the case.
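One way to reduce that risk (an assumption on my part, not part of the shipped scripts) is to claim a book only after its size has been stable for a short interval, so files still being transmitted are left alone:

```shell
book=$(mktemp)
printf 'partial book content' > "$book"        # pretend book mid-transfer
size1=$(wc -c < "$book" | tr -d ' ')
sleep 2                                        # a real check might wait 30-60 seconds
size2=$(wc -c < "$book" | tr -d ' ')
if [ "$size1" -eq "$size2" ]; then
    echo "size stable: safe to claim"          # nothing wrote to it, so stable here
else
    echo "still growing: skip until the next polling cycle"
fi
```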