Performance Considerations for Stagingprop Consolidation
Stagingprop propagates data from a staging server to a production server in two stages: consolidation and propagation. In this blog we'll cover some important aspects of tuning stagingprop consolidation performance.
During the consolidation stage, the utility examines the STAGLOG table for unprocessed records and marks as processed those that don't need to be propagated. For instance, if you have an INSERT, followed by a DELETE, followed by another INSERT, stagingprop will eliminate the first two records and propagate only the last INSERT.
Depending on the size of the STAGLOG table and the number of unprocessed records, the consolidation process may experience slow performance.
How long does it take to consolidate?
You can easily find out how long it takes to consolidate all unprocessed records in the STAGLOG table. Just open the stagingprop log file and search for "Consolidation phase will commence" and "The command completed the consolidation job successfully". Each line in the stagingprop log has a timestamp, so the consolidation duration is the delta between the two.
20170502-135301| Consolidation phase will commence.
20170502-135312| The command completed the consolidation job successfully.
The above indicates consolidation took 11 seconds to complete (13:53:12 - 13:53:01).
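Assuming GNU date and the timestamp format shown above, a small shell sketch can compute the delta automatically (the log file name and its contents here are just the example lines from above; point LOG at your real stagingprop log):

```shell
#!/bin/bash
# Illustrative: compute the consolidation duration from a stagingprop log.
LOG=/tmp/stagingprop.log
cat > "$LOG" <<'EOF'
20170502-135301| Consolidation phase will commence.
20170502-135312| The command completed the consolidation job successfully.
EOF

# Timestamps look like 20170502-135301 (yyyymmdd-hhmmss).
to_epoch() {
  date -d "${1:0:8} ${1:9:2}:${1:11:2}:${1:13:2}" +%s   # GNU date
}

start=$(grep 'Consolidation phase will commence' "$LOG" | cut -d'|' -f1)
end=$(grep 'completed the consolidation job successfully' "$LOG" | cut -d'|' -f1)
delta=$(( $(to_epoch "$end") - $(to_epoch "$start") ))
echo "Consolidation took $delta seconds"
```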
Consider the following recommendations to improve the consolidation process.
As part of your WebSphere Commerce database maintenance, you should run dbclean on a regular basis. Run dbclean against the STAGLOG table to purge all processed records and reduce its size. This, in turn, improves performance during both consolidation and propagation.
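As a sketch, purging processed STAGLOG records might look like the following. The flags shown follow the common form of the dbclean utility, but they vary between WebSphere Commerce versions, and the database name and credentials are placeholders; verify against your version's dbclean documentation before running it.

```shell
# Hypothetical invocation -- verify flags for your release.
# Deletes STAGLOG records already marked as processed, older than 2 days.
./dbclean.sh -object staglog -type obsolete -db stgdb \
    -dbuser db2inst1 -dbpwd pwd01 -days 2
```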
Refer to this blog's other dbclean posts for more information.
Make sure you've installed JR55941. This APAR adds an index to the STAGLOG table to improve performance.
Alternatively, you can apply the local fix of creating the index manually.
CREATE INDEX i0001538
When a large number of records is unprocessed, stagingprop can abend with a java.lang.OutOfMemoryError or a full database transaction log. Make sure to:
- Tune the runtime JVM heap from the stagingprop.sh script.
Locate the java invocation in the script and increase the maximum heap size (-Xmx).
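For illustration only (the exact line in stagingprop.sh differs between WebSphere Commerce releases, and the heap sizes below are examples, not recommendations), the change might look like:

```shell
# Before (typical default):
#   java -Xms128m -Xmx512m <stagingprop class and arguments>
# After (raised maximum heap -- tune the value to your data volume):
#   java -Xms128m -Xmx1536m <stagingprop class and arguments>
```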
- Increase the log space for the database transactions
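On DB2, for example, transaction log space is controlled by the LOGFILSIZ, LOGPRIMARY, and LOGSECOND configuration parameters. The database name and sizes below are placeholders; the change takes effect once all connections close or the database is deactivated and reactivated.

```shell
# Placeholders: adjust the database name and sizes to your environment.
db2 "UPDATE DB CFG FOR stgdb USING LOGFILSIZ 10240"
db2 "UPDATE DB CFG FOR stgdb USING LOGPRIMARY 20 LOGSECOND 60"
```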
If you're still experiencing problems after exhausting the above options, then use the consolidationSize parameter. The parameter is appended to the stagingprop command at runtime:
./stagingprop.sh -scope _all_ -sourcedb stgdb -sourcedb_user db2inst1 -sourcedb_passwd pwd01 -consolidationSize 100 -destdb prddb -destdb_user db2inst1 -destdb_passwd pwd01 -actionOnError 2
The default behavior is to retrieve all unprocessed records at once; if the number of unprocessed records is large, you may run into problems during the consolidation stage. You should then consider consolidating one table at a time (if possible). A good rule of thumb is to set consolidationSize high enough that each to-be-consolidated table needs only a single fetch. The following SQL statement can be used to retrieve the number of unprocessed records in the STAGLOG table for each staged table:
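For instance (column names assume the standard STAGLOG layout, where STGPROCESSED = 0 marks an unprocessed record and STGTABLE holds the staged table's name):

```sql
SELECT STGTABLE, COUNT(*) AS UNPROCESSED
  FROM STAGLOG
 WHERE STGPROCESSED = 0
 GROUP BY STGTABLE
 ORDER BY UNPROCESSED DESC;
```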
The result set shows, in descending order, the number of unprocessed records for each table. Consider running stagingprop using the largest per-table count of unprocessed records as the consolidationSize. For example, for the following output use 2,700,000 as the consolidationSize.
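Purely illustrative (these table names and counts are invented examples, sized so the largest row matches the 2,700,000 figure above):

```text
STGTABLE            UNPROCESSED
------------------  -----------
CATENTRY                2700000
CATENTDESC              1450000
OFFER                    310000
```

With output like this, -consolidationSize 2700000 would let every table be consolidated in a single fetch.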