Tuning for S&P when data amounts are large
sandrajones
If the index statistics are not current, this can cause problems for the S&P agent, as a large amount of I/O must be done to find the data pages on disk.
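As an illustration only, if the warehouse database is DB2 the statistics can be refreshed with the RUNSTATS command; the schema and table name below are hypothetical, and other RDBMSs have equivalent commands (e.g. UPDATE STATISTICS, ANALYZE):

```sql
-- Hypothetical warehouse schema/table; refresh table and index statistics
-- so the optimizer can locate data pages without excessive I/O
RUNSTATS ON TABLE ITMUSER."NT_Memory"
  WITH DISTRIBUTION AND DETAILED INDEXES ALL;
```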
This caused a number of issues, with a single S&P run taking several days and still not finishing.
One symptom seen was that nothing was written to the *sy_java log for a considerable time, which suggested that the S&P agent had hung. In fact it was still running, just not writing to the log files; it was executing the deletes on the tables.
In this case some of the larger tables were manually cleared of data to allow the S&P agent to catch up.
In addition, some fine-tuning was done.
All the values below are set in the sy.ini file, and the agent must be stopped and restarted for the changes to take effect.
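For illustration, a tuning entry in sy.ini might look like the following. These parameter names exist for the S&P agent, but the values shown are examples only, not the settings used in this case:

```
# Illustrative sy.ini values only; tune for your own environment
KSY_MAX_WORKER_THREADS=4
KSY_MAX_ROWS_PER_TRANSACTION=1000
```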
The values of KSY_
However, it should be noted that the number of rows dealt with at one time by each thread is (batch factor):
In this case since KSY_
In this case KSY_
However, there is a caveat to these tuning levels: the threads can run out of memory if the values are set too high.
The Java memory can be increased with KSZ_JAVA_ARGS, and in this case the value was set to:
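The value actually used is not preserved in this article. As an illustration only, the setting typically passes a maximum heap size to the JVM via the standard -Xmx option, for example:

```
# Illustrative only; not the value used in this case.
# -Xmx sets the JVM maximum heap size (here 2 GB).
KSZ_JAVA_ARGS=-Xmx2048m
```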
There may also be issues on the database side if the levels are set too high; check your RDBMS logs, and consult your RDBMS administrator, for any errors after the changes have been made.