It takes a lot of work to tune an enterprise system for optimal performance, but there is low-hanging fruit that often gives your MDM applications the biggest boost. I list several items here.
1. Ensure the MDM application server's log4j.properties file is configured to log only essential messages. We normally recommend the ERROR logging level for best performance. INFO, though useful, may be too much for a production system. A DEBUG setting is sometimes carried over from a staging environment that no one remembered to set back to a lower level.
Symptoms: heavy I/O activity on the application server; transactions slower than what you have seen before
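As a sketch, the relevant line of a production log4j.properties would look something like this (the appender name FILE is an illustrative assumption; keep whatever appenders your installation already defines):

```properties
# Production setting: log only ERROR and above
log4j.rootLogger=ERROR, FILE
# A rootLogger left at INFO or DEBUG is a common leftover from staging
```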
2. Ensure WebSphere configurations are correctly set (typical values in brackets):
JVM heap size (at least -Xms512m/-Xmx1024m)
GC policy (gencon)
EJB cache (3500)
Prepared statement cache for each data source (400)
WebSphereDefaultIsolationLevel for each data source (set to 2, i.e., TRANSACTION_READ_COMMITTED or Cursor Stability)
Symptoms: the DBA tells you that queries are very fast, but transactions are still slow; cannot process more than a certain number of threads; high garbage-collection overhead (CPU usage peaks but nothing gets done); overall response times are high
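For the heap and GC policy above, the corresponding generic JVM arguments on the IBM JDK that ships with WebSphere would look something like this (a sketch; size the heap to your own workload):

```
-Xms512m -Xmx1024m -Xgcpolicy:gencon
```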
3. JDBC data source connection pool: set the maximum to at least the number of client threads. For example, if 20 concurrent BatchProcessor threads are performing the initial load, set this value to 20.
Symptoms: clients getting "JDBC connection timeout" errors and/or transaction rollbacks
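The sizing rule above is simple arithmetic, sketched below; the function name and thread counts are illustrative, not part of any MDM API:

```python
def min_jdbc_pool_max(*concurrent_thread_counts: int) -> int:
    """The JDBC pool maximum must cover every client thread that can
    hold a connection at the same time, summed across all clients."""
    return sum(concurrent_thread_counts)

# 20 BatchProcessor threads doing the initial load -> pool max of at least 20
assert min_jdbc_pool_max(20) == 20
# BatchProcessor threads plus, say, 10 concurrent online callers -> at least 30
assert min_jdbc_pool_max(20, 10) == 30
```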
4. Application servers > <your app server name> > ORB service. This is a pool of ORB connections that handles client threads. Make sure the timeout value is set to 180 seconds and the connection cache maximum is set to the number of concurrent client threads or larger.
Symptoms: clients getting "ORB connection timeout" errors
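If you set these through ORB custom properties rather than the console fields, the property names below are my best understanding of the WebSphere ORB settings; verify them against your WebSphere version (the cache value 256 is illustrative):

```properties
com.ibm.CORBA.RequestTimeout=180
com.ibm.CORBA.ConnectionCacheMaximum=256
```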
5. When using Event Manager, set the MQ MaxUncommittedMsgs queue manager value to a number equal to or larger than EventDetectionMaximumTasksInQueueOverride.
Symptoms: messages are being dropped by MQ, e.g., 1,000 messages sent but only 980 processed
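Assuming the Event Manager queue lives on an IBM MQ queue manager, MaxUncommittedMsgs corresponds to the queue-manager-level MQSC attribute MAXUMSGS (the value 10000 below is illustrative; pick one at or above your EventDetectionMaximumTasksInQueueOverride):

```
ALTER QMGR MAXUMSGS(10000)
DISPLAY QMGR MAXUMSGS
```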
6. The two Event Manager settings EventDetectionMaximumTasksInQueueOverride and EventDetectionJobDefaultCycle must work together; larger values are not necessarily better. For example, suppose your current setting of processing 500 messages in 100 seconds (EventDetectionJobDefaultCycle=100000) gives you exactly 5 TPS. For higher TPS, tune EventDetectionJobDefaultCycle first: 10 TPS would require the 500 messages to finish in 50 seconds, i.e., EventDetectionJobDefaultCycle=50000. Try that, and if you still time out, the bottleneck is somewhere else.
Symptoms: Hardware resource available, but cannot achieve higher throughput
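The cycle-time arithmetic in item 6 can be sketched as follows (the helper function is illustrative, not an MDM API):

```python
def cycle_ms_for_target_tps(messages_per_cycle: int, target_tps: float) -> int:
    """EventDetectionJobDefaultCycle value (in ms) needed for a target
    throughput: messages_per_cycle messages must complete in one cycle."""
    return int(messages_per_cycle / target_tps * 1000)

# Current: 500 messages every 100 s -> 5 TPS
assert cycle_ms_for_target_tps(500, 5) == 100000
# Target 10 TPS: the same 500 messages must finish in 50 s
assert cycle_ms_for_target_tps(500, 10) == 50000
```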
Remember to send me the topics you are interested in.