Cache management

Keeping frequently accessed objects in a cache is a primary technique for improving performance. Product Master has a built-in cache mechanism for some Product Master objects, and at the solution level, some data can be cached for reuse to improve performance.

The use of a cache must balance the number of cached objects against memory consumption. If the cache size is too small, frequent object replacement and cache misses can result; if the cache size is too large, memory can be wasted, because the size limit never takes effect to flush unused cache objects.

Check the following parameters, which are used to tune performance and resource utilization. Review the list and make appropriate changes for your environment. The parameters are listed in priority order.
Table 1. Performance parameters

Parameter                 Defined in file
max_specs_in_cache        mdm-cache-config.properties
max_lookups_in_cache      mdm-cache-config.properties
max_ctgviews_in_cache     mdm-cache-config.properties
max_roles_in_cache        mdm-cache-config.properties
max_accesses_in_cache     mdm-cache-config.properties
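
As an illustration, the limits above might be set in mdm-cache-config.properties as follows. The values shown are placeholders for discussion, not recommendations; tune them for your own workload:

```
# Illustrative values only -- size each cache for your environment
max_specs_in_cache=1000
max_lookups_in_cache=300
max_ctgviews_in_cache=100
max_roles_in_cache=50
max_accesses_in_cache=50
```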

Example

  1. If an entry has a view with 1000 secondary specs associated with it (as the attribute collection used by the view), those specs are loaded or accessed during the entry build. If the spec cache is set to 500, the cache hit rate could be zero: the 1000 specs are loaded in sequence, so later specs never get a cache hit but push out the earlier ones. This wastes processing time and memory at the same time. In this scenario, it is better to set the cache size to 1000, so the cache gets good hit rates and avoids the push-out.
  2. If an implementation has 1000 lookup tables for various purposes, but they are unlikely to all be used in a single user scenario, it may not be necessary to set the cache size to 1000; that is, there is no need to cache all lookup tables all the time. With the cache size limit in place, the least recently used (LRU) entries can be flushed to free up memory.
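Scenario 1 can be demonstrated with a minimal LRU cache sketch (this is an illustration of the eviction behavior, not Product Master's actual implementation): streaming 1000 objects through a cache of 500 in sequence yields zero hits, while a cache of 1000 gets a hit for every repeat access.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: flushes the least recently used entry
    when the size limit is exceeded. Illustrative only."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get_or_load(self, key, loader):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)       # mark as most recently used
            return self.data[key]
        self.misses += 1
        value = loader(key)
        self.data[key] = value
        if len(self.data) > self.capacity:   # size limit reached:
            self.data.popitem(last=False)    # flush least recently used
        return value

# 1000 specs accessed in sequence, twice, through a cache of 500:
small = LRUCache(500)
for _ in range(2):
    for spec_id in range(1000):
        small.get_or_load(spec_id, lambda k: f"spec-{k}")
print(small.hits)    # 0 -- every spec is pushed out before it is reused

# Same workload with a cache of 1000: the second pass is all hits.
big = LRUCache(1000)
for _ in range(2):
    for spec_id in range(1000):
        big.get_or_load(spec_id, lambda k: f"spec-{k}")
print(big.hits)      # 1000
```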
The effectiveness of a cache depends on how frequently objects are accessed; the key metric is the cache hit rate. If not properly configured, a cache can even hurt performance (when you get mostly cache misses).

Product Master object caches are configured by object type, with a maximum object count and a cache timeout value to limit memory consumption. When the object count reaches the cache size limit, cached objects are flushed based on an LRU (least recently used) algorithm. When a cached object is inactive for longer than its timeout value, that object is reclaimed. This ensures that memory is freed promptly according to the configuration.
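
The interaction of the two limits can be sketched as follows. This is a simplified illustration of a size-limited cache with an inactivity timeout, not Product Master's actual implementation; the names and values are assumptions for the example:

```python
import time
from collections import OrderedDict

class TimedLRUCache:
    """Sketch of a cache with both a size limit (LRU flush) and an
    inactivity timeout (reclamation). Illustrative only."""
    def __init__(self, max_entries, timeout_seconds):
        self.max_entries = max_entries
        self.timeout = timeout_seconds
        self.data = OrderedDict()            # key -> (value, last access)

    def put(self, key, value):
        self.data[key] = (value, time.monotonic())
        self.data.move_to_end(key)
        if len(self.data) > self.max_entries:
            self.data.popitem(last=False)    # size limit: flush LRU entry

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, _ = entry
        self.data[key] = (value, time.monotonic())  # refresh activity time
        self.data.move_to_end(key)
        return value

    def reap_expired(self):
        """Reclaim entries that have been inactive past the timeout."""
        now = time.monotonic()
        expired = [k for k, (_, t) in self.data.items()
                   if now - t > self.timeout]
        for key in expired:
            del self.data[key]

cache = TimedLRUCache(max_entries=2, timeout_seconds=0.05)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)           # size limit reached: "a" (LRU) is flushed
print("a" in cache.data)    # False
time.sleep(0.06)
cache.reap_expired()        # remaining entries idle past the timeout
print(len(cache.data))      # 0 -- memory freed promptly
```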