System tuning

To tune Product Master, you tune the JVM memory settings, scale the services horizontally and vertically, and tune the scheduler and the workflow engine.

JVM tuning

Memory settings for all the Product Master services are in the $TOP/bin/conf/service_mem_settings.ini file. The default memory settings are not optimized.

Following are the best practices for JVM tuning:
  • Size your Java™ heap so that your application runs with a minimum heap usage of 40%, and a maximum heap usage of 70%.
  • Set the -Xmx and -Xms parameters for the scheduler, appsvr, and workflow engine services to 1024 m or 1536 m. Other services can initially remain at the default size of 64 m.
    • On 64-bit environments, you can increase the memory settings beyond 1536 m if needed, provided that enough physical memory is available so that no memory swap occurs.
    • On 32-bit environments, do not increase the -Xmx setting beyond 1536 m because doing so increases the risk of running out of native Java memory.
  • For optimal settings, monitor memory usage and adapt the values over time.
    Note: You can add the -verbose:gc option to the $TOP/bin/conf/service_mem_settings.ini file. By default, the verbose garbage collection output is recorded in the svc.out file. To specify a different file, add the following option:
    -Xverbosegclog:<file path and file name>
  • To take a point-in-time snapshot of JVM service memory usage, run the following command:
    $JAVA_RT com.ibm.ccd.common.wpcsupport.util.SupportUtil --cmd=getRunTimeMemDetails
    However, for continuous monitoring, verbose gc is the recommended method.
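
The JVM options described in the preceding list can be combined on a single service's settings line. The following is a minimal sketch; the log path /tmp/scheduler_gc.log is a hypothetical example, not a product default:

```
-Xms1024m -Xmx1536m -verbose:gc -Xverbosegclog:/tmp/scheduler_gc.log
```

With these options, the heap starts at 1024 MB, is capped at 1536 MB, and garbage collection activity is logged to the specified file instead of svc.out.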

The memory settings for the appsvr service are stored in the application object. You can change them in System Status in the System Administrator module by re-creating the application object.

Scheduler tuning

To tune the scheduler, you set memory flags for the size of the largest job and instantiate enough schedulers based on the number of processors in the application server workstation.

The number of schedulers to set up is determined by the number of processors in the scheduler server at a 1:1 ratio. This ratio includes hyper-threaded processors, but it can be increased slightly to 2:1 for dual-core processors. You should test this ratio to measure its effect on performance.

Each scheduler can run multiple worker threads, and each worker thread can run multiple jobs; the default number of worker threads is 8. The number of threads is specified by the num_threads parameter in the $TOP/etc/default/common.properties file. In environments with large numbers of jobs, this number can be increased to 10 or even 20, but increasing the number of schedulers, scheduler servers, or both is more useful.
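
For example, to raise the worker-thread count in an environment with a large job volume, you would edit the num_threads parameter in $TOP/etc/default/common.properties; the value 10 here is illustrative:

```properties
# Number of worker threads each scheduler runs (default is 8)
num_threads=10
```

Restart the scheduler service after changing this value so that the new thread count takes effect.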

Large jobs can benefit more from configurations with multiple schedulers where each scheduler runs a single thread. A single thread per scheduler increases the amount of memory that each scheduler has per job.

Tip: If possible, do not run the scheduler on a system that also runs the appsvr service.

Workflow tuning

Increasing the memory by setting the -Xmx parameter to 1536 m is the only tunable aspect of the workflow engine service.

Horizontal and vertical scaling

For information about horizontal and vertical scaling, which involves implementing product services across multiple application servers or multiple services on the same server to improve performance, see Configure a cluster environment.

Performance tuning

Important: You need sudo user access to set the following parameters.
  • Ulimit parameters

    Add the following lines to the /etc/security/limits.conf file to improve performance:

    
    *                    soft    nofile          100000
    *                    hard    nofile          100000
    *                    soft    nproc           100000
    *                    hard    nproc           100000
    *                    soft    core            unlimited
    *                    hard    core            unlimited
    <db instance owner>  soft    nproc           100000
    <db instance owner>  hard    nproc           100000
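    After you edit /etc/security/limits.conf and start a new login session, you can verify the effective limits for the current user with a quick shell check (a minimal sketch; the values should match the settings above):

    ```shell
    # Show the soft and hard limits for open files (nofile)
    ulimit -Sn
    ulimit -Hn
    # Show the soft and hard limits for user processes (nproc)
    ulimit -Su
    ulimit -Hu
    # Show the core file size limit (should be "unlimited" with the settings above)
    ulimit -Sc
    ```

    Note that pam_limits applies limits.conf at login, so changes are not visible in sessions that were already open.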
    Note: Depending on your workload, increase these values further if performance issues occur.
  • TCP tuning

    Open a command-line window, and run the following commands:

    sysctl -w net.ipv4.tcp_tw_recycle=1
    sysctl -w net.ipv4.tcp_tw_reuse=1
    To verify, run the following command:
    sysctl -a | egrep "reuse|recycle"
    The value of the following keys should now be 1:
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_tw_reuse = 1
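    Settings applied with sysctl -w do not survive a reboot. To make them persistent, you can add the keys to /etc/sysctl.conf and reload with sysctl -p. Note that net.ipv4.tcp_tw_recycle was removed in Linux kernel 4.12, so on newer kernels only the reuse key applies:

    ```
    # /etc/sysctl.conf - persistent TCP TIME-WAIT settings
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_tw_reuse = 1
    ```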