PMT: What should be cached in WebSphere Commerce
Caching data is one of the best tools available in WebSphere Commerce to deliver great performance. By reusing a request result that was already computed, the system saves significant time and CPU. However, caching strategies can be difficult to implement: they require a method to determine what is cacheable, what the cache key will be, which dependency identifiers will invalidate the cache entry, and a mechanism to trigger the right cache invalidation.
To determine what to cache, we usually rely on the experience of developers, who try to guess which parts of the application are the best candidates for caching.
In this article, I will show how you can use the performance monitoring tool built into WebSphere Commerce to determine which operations would benefit the most from caching.
Capturing execution metrics
In my previous article, I showed how the performance monitoring tool can be used to capture execution metrics. I will now show how to read those metrics to determine caching potential.
1 - Pick an environment to instrument
The metrics we will capture should represent the request load and distribution experienced in a production environment.
For instance, pick a node in your production or performance test environment to act as a metric gatherer or use a test system on which you can replay a day in the life of your production environment.
2 - Enable the performance logger
This is done by enabling the following logger in your WebSphere Administration Console:
You must also ensure that your trace.log file is allowed to grow to a large size. It is suggested to allocate at least 1 GB of file size to capture a large number of metrics over a long period of time.
3 - Let the system run to gather metrics
We suggest running a performance test for at least 15 minutes before gathering the trace log files. The longer the test runs, the more accurate the results will be; however, more metrics will also make the performance reports take longer to generate.
4 - Generate the performance reports
On a toolkit, you can run the command
Note: you can customize the report generation phase by modifying the configuration file located in
The property file documents every parameter it uses in commented-out text.
Ensure that the property file references all the trace log files that were generated during your test.
For instance, the following configuration is used on a runtime server to load two files:
logFileToLoadList= /opt/WebSphere/AppServer/profiles/demo/logs/server1/trace.log | /opt/WebSphere/AppServer/profiles/demo_solr/logs/solrServer/trace.log
Note: If you wish to generate reports on a "runtime" system that doesn't have a graphical user interface, you will need to apply APAR JR52262 or use Fix Pack 10+.
Reading the performance reports
To get cache potential metrics, you will need to switch the operation performance report to use the Cache Potential Layout.
The report will then contain metrics relevant to caching potential. Note that most of the cache potential metrics assume the system cache would be "perfect": a cache whose data never gets invalidated, where a cache hit results in an instant response time, and whose size is big enough to hold every entry the cache requires.
While those metrics will not match reality, they still allow us to evaluate the relative benefit that a cache could bring to different operations.
It is also necessary to review the reports manually to filter out irrelevant operations. For instance, the metrics might show that an operation would result in a high cache hit ratio, but if that operation calculates data that always changes, it would not be a valid candidate for caching.
Some of the most interesting metrics include:
Maximum time saved by cache
This metric sums the time that would have been saved on the system if a perfect cache were applied to the current operation. The highest value usually indicates which operation is the best candidate for caching.
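As a rough illustration of how such a value can be derived (the call log below is made up, not actual report output): under a perfect cache, every call after the first for a given cache key would cost nothing, so the time saved is the summed duration of all repeated calls.

```python
# Hypothetical call log for one operation: (cache_key, duration_ms) pairs.
calls = [("sku=101", 120), ("sku=102", 95), ("sku=101", 118), ("sku=101", 121)]

seen = set()
time_saved_ms = 0
for key, duration in calls:
    if key in seen:
        # A perfect cache would serve this repeated call instantly.
        time_saved_ms += duration
    else:
        seen.add(key)  # The first call for a key is always a miss.

print(time_saved_ms)  # 239 (the two repeated sku=101 calls)
```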
Maximum theoretical cache hit percentage
This metric indicates how many of the calls to a specific operation would result in a cache hit in theory.
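In the same spirit (again with made-up numbers), the theoretical hit percentage follows from the fact that, in a perfect cache, each distinct cache key misses exactly once and every other call is a hit:

```python
# Hypothetical per-operation counters.
total_calls = 1000
distinct_keys = 150  # each distinct key misses exactly once in a perfect cache

hit_pct = 100 * (total_calls - distinct_keys) / total_calls
print(hit_pct)  # 85.0
```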
Maximum cache size
This indicates how much space an operation would take in the cache. Note that this metric is only valid if the result size was captured, which requires the performance logger to be enabled at the FINER level for some application layers.
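Conceptually (with hypothetical sizes), the maximum cache size is the summed result size of the distinct entries the operation would store, since a perfect cache keeps one entry per cache key:

```python
# Hypothetical distinct cache entries for one operation: key -> result size in KB.
results = {"sku=101": 12, "sku=102": 8, "sku=103": 20}

max_cache_size_kb = sum(results.values())
print(max_cache_size_kb)  # 40
```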
Cache effectiveness vs theory
This metric is useful if you already have a cache enabled for an operation and would like to know how well it performs. Note that not all layers are capable of identifying a cache hit or miss; as of this writing, this metric is relevant only for servlet-level caching. A value of 100% indicates that the cache is performing at peak potential efficiency. A value greater than 100% might indicate that a cache is overused based on the parameters in use. A value lower than 100% indicates that the cached data wasn't reused as much as it could have been, which might be caused by a cache timeout, a cache eviction, or a cache invalidation trigger.
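To make the ratio concrete (numbers are hypothetical): effectiveness compares the hits the real cache actually served against the hits a perfect cache would have served for the same call stream.

```python
# Hypothetical counters for one operation.
actual_hits = 600       # cache hits observed on the real system
theoretical_hits = 850  # hits a perfect cache would have produced

effectiveness_pct = 100 * actual_hits / theoretical_hits
print(round(effectiveness_pct, 1))  # 70.6 -> the real cache captured ~71% of its potential
```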
Implementing a cache
Once you have figured out which operation is a prime candidate for caching, it is up to you to determine how to implement the cache based on the tools and techniques at your disposal.