Tuning batchsize and transaction size parameters for StagingProp in WebSphere Commerce
SalmanAhmadKhan
IBM encourages clients to use a combination of the batchsize and transaction size parameters for improved performance.
By default, the batchsize value is set to 100. If the stagingProp utility fails with this setting, the error message may not identify the record that caused the failure, since any of the 100 records in the failing batch could be responsible. In that case, we recommend setting -batchsize 0. This turns off JDBC batching and can help you identify the exact record that is causing the error.
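The effect is easiest to see with a small simulation. The sketch below is not the stagingProp utility itself; it is a hypothetical model of why batched execution obscures the failing record (the driver reports only the failing batch) while row-at-a-time execution (-batchsize 0) pinpoints it.

```python
# Hypothetical model of batched vs. unbatched propagation.
# With batching, a single bad row causes the whole batch to fail and
# only the batch can be reported; with batchsize 0, each row is
# executed individually, so the exact failing record is known.

def propagate(records, batchsize, is_valid):
    """Return (rows_committed, error_message_or_None)."""
    if batchsize == 0:  # -batchsize 0: no JDBC batching
        for i, rec in enumerate(records):
            if not is_valid(rec):
                return i, f"record {rec} failed"  # exact record known
        return len(records), None
    committed = 0
    for start in range(0, len(records), batchsize):
        batch = records[start:start + batchsize]
        if any(not is_valid(r) for r in batch):
            # The whole batch fails; we cannot say which row caused it.
            return committed, f"batch starting at row {start} failed"
        committed += len(batch)
    return len(records), None

records = list(range(10))
bad = lambda r: r != 7  # pretend record 7 is the broken one

print(propagate(records, 3, bad))  # batched: only the batch is named
print(propagate(records, 0, bad))  # unbatched: record 7 is identified
```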
This diagram visualizes the relationship between the batchsize and transaction parameters.
[Figure: Batchsize and Transaction Relationship]
For example, if the transaction size is set to 100 and the batchsize is 30, changes to production are committed every 4 batches (the commit occurs once at least 100 records have accumulated).
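The commit cadence can be sketched as follows. This is a hedged model, assuming that rows are flushed in batches of batchsize and that a commit is issued at the first batch boundary where at least transaction rows have accumulated since the last commit:

```python
# Sketch of the batchsize/transaction interaction: a commit is issued
# at the first batch boundary where the rows processed since the last
# commit reach the transaction size.

def commit_points(total_rows, batchsize, transaction):
    """Return the batch numbers (1-based) after which a commit occurs."""
    commits, pending, batch_no = [], 0, 0
    for start in range(0, total_rows, batchsize):
        batch_no += 1
        pending += min(batchsize, total_rows - start)
        if pending >= transaction:
            commits.append(batch_no)
            pending = 0
    return commits

# transaction=100, batchsize=30: a commit lands every 4th batch,
# matching the example in the text (120 rows >= 100 at batch 4).
print(commit_points(total_rows=360, batchsize=30, transaction=100))
```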
Recommendations for batchsize and transaction size parameters
The recommended ratio of batchsize to transaction size is 1:10. To resolve general performance issues, consider using larger values for both parameters. In IBM internal testing, the optimal values were 100,000 for batchsize and 1,000,000 for transaction size. The JVM heap size may need to be increased if a large batchsize value is used.
If stagingprop is taking too long, consider using the -batchsize and -transaction parameters to process less data at once. If you are propagating too many rows each run, consider running stagingprop at more frequent intervals, which reduces the amount of data per propagation. Keep in mind, however, that the larger the batch size, the larger the heap required to hold the SQL statements in memory before a commit is performed.