Dynamic cache provider overview
WebSphere® Application Server provides a dynamic cache service that is available to deployed Java™ EE applications. This service caches data such as output from servlets, JSPs, or commands, as well as object data that an enterprise application stores programmatically with the DistributedMap APIs.
Initially, the only service provider for the dynamic cache service was the default dynamic cache engine that is built into WebSphere Application Server. You can also specify WebSphere eXtreme Scale as the cache provider for any cache instance. With this capability, applications that use the dynamic cache service can take advantage of the features and performance capabilities of WebSphere eXtreme Scale.
You can install and configure the dynamic cache provider as described in Configuring the default dynamic cache instance (baseCache).
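Applications interact with the dynamic cache service programmatically through the DistributedMap APIs, typically obtained with a JNDI lookup of `com.ibm.websphere.cache.DistributedMap`. The following self-contained sketch only models the dependency-ID semantics that those APIs expose (it is not the WebSphere API itself, and the class and JNDI names in the comments are illustrative): entries can be tagged with dependency IDs, and invalidating an ID removes every tagged entry.

```java
import java.util.*;

// Simplified, self-contained model of DistributedMap-style caching.
// The real API is com.ibm.websphere.cache.DistributedMap, looked up via
// JNDI (for example, "services/cache/distributedmap"); this sketch models
// only the put/get/invalidate-by-dependency-ID behavior.
public class DependencyCacheModel {
    private final Map<String, Object> entries = new HashMap<>();
    // dependency ID -> keys of the entries tagged with that ID
    private final Map<String, Set<String>> dependencies = new HashMap<>();

    public void put(String key, Object value, String... dependencyIds) {
        entries.put(key, value);
        for (String dep : dependencyIds) {
            dependencies.computeIfAbsent(dep, d -> new HashSet<>()).add(key);
        }
    }

    public Object get(String key) {
        return entries.get(key);
    }

    // Invalidating a dependency ID removes every entry tagged with it.
    public void invalidate(String dependencyId) {
        Set<String> keys = dependencies.remove(dependencyId);
        if (keys != null) {
            keys.forEach(entries::remove);
        }
    }

    public static void main(String[] args) {
        DependencyCacheModel cache = new DependencyCacheModel();
        cache.put("product:1", "Widget", "catalog");
        cache.put("product:2", "Gadget", "catalog");
        cache.invalidate("catalog");                 // drops both tagged entries
        System.out.println(cache.get("product:1"));  // prints "null"
    }
}
```

The same invalidation-by-dependency-ID pattern is what the global index feature (described later in this topic) accelerates in partitioned grids.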
Deciding how to use WebSphere eXtreme Scale
The available features in WebSphere eXtreme Scale significantly increase the distributed capabilities of the dynamic cache service beyond what the default dynamic cache provider and the data replication service (DRS) offer. With eXtreme Scale, you can create caches that are truly distributed across multiple servers, rather than just replicated and synchronized between them. Also, eXtreme Scale caches are transactional and highly available, ensuring that each server sees the same contents for the dynamic cache service. WebSphere eXtreme Scale offers a higher quality of service for cache replication than DRS provides.
However, these advantages do not mean that the eXtreme Scale dynamic cache provider is the right choice for every application. Use the decision trees and the feature comparison table that follow to determine which technology best fits your application.
Decision tree for migrating existing dynamic cache applications
Decision tree for choosing a cache provider for new applications
Feature comparison
Cache features | Default provider | eXtreme Scale provider | eXtreme Scale API
---|---|---|---
Local, in-memory caching | Yes | via near-cache capability | via near-cache capability
Distributed caching | via DRS | Yes | Yes
Linearly scalable | No | Yes | Yes
Reliable replication (synchronous) | No | Yes | Yes
Disk overflow | Yes | N/A | N/A
Eviction | LRU/TTL/heap-based | LRU/TTL (per partition) | LRU/TTL (per partition)
Invalidation | Yes | Yes | Yes
Relationships | Dependency / template ID relationships | Yes | No (other relationships are possible)
Non-key lookups | No | No | via Query and index
Back-end integration | No | No | via Loaders
Transactional | No | Yes | Yes
Key-based storage | Yes | Yes | Yes
Events and listeners | Yes | No | Yes
WebSphere Application Server integration | Single cell only | Multiple cell | Cell independent
Java Standard Edition support | No | Yes | Yes
Monitoring and statistics | Yes | Yes | Yes
Security | Yes | Yes | Yes
For a more detailed description on how eXtreme Scale distributed caches work, see Planning the topology.
Topology
A dynamic cache service that is created with eXtreme Scale as the provider can be deployed in a remote topology.
Remote topology
The remote topology eliminates the need for a disk cache. All of the cache data is stored outside of WebSphere Application Server processes. WebSphere eXtreme Scale supports standalone container processes for cache data. These container processes have a lower overhead than a WebSphere Application Server process and are not limited to a particular Java Virtual Machine (JVM). For example, the data for a dynamic cache service that is accessed by a 32-bit WebSphere Application Server process could be located in an eXtreme Scale container process running on a 64-bit JVM. This arrangement lets you use the increased memory capacity of 64-bit processes for caching, without incurring the additional overhead of 64-bit application server processes. The remote topology is shown in the following image:
Dynamic cache engine and eXtreme Scale functional differences
Users should not notice a functional difference between the two caches, except that caches backed by WebSphere eXtreme Scale do not support disk offload, or the statistics and operations that relate to the in-memory size of the cache.
No appreciable difference exists in the results returned by most dynamic cache API calls, regardless of whether you are using the default dynamic cache provider or the eXtreme Scale cache provider. For some operations, you cannot emulate the behavior of the dynamic cache engine with eXtreme Scale.
Dynamic cache statistics
You can retrieve statistical data for a WebSphere eXtreme Scale dynamic cache with the eXtreme Scale monitoring tooling. For more information, see Monitoring.
MBean calls
The WebSphere eXtreme Scale dynamic cache provider does not support disk caching. Any MBean calls relating to disk caching do not work.
Dynamic cache replication policy mapping
The remote topology of the eXtreme Scale dynamic cache provider supports a replication policy that most closely matches the SHARED_PULL and SHARED_PUSH_PULL policies (in the terminology of the default WebSphere Application Server dynamic cache provider). In an eXtreme Scale dynamic cache, the distributed state of the cache is consistent across all of the servers.
Global index invalidation
You can use a global index to improve invalidation efficiency in large partitioned environments; for example, more than 40 partitions. Without the global index feature, dynamic cache template and dependency invalidation processing must send remote agent requests to all partitions, which results in slower performance. When you configure a global index, invalidation agents are sent only to the partitions that contain cache entries related to the template or dependency ID. The potential performance improvement is greater in environments with large numbers of partitions. You can configure a global index with the Dependency ID and Template ID indexes, which are available in the example dynamic cache ObjectGrid descriptor XML files. For more information, see Configuring an enterprise data grid in a stand-alone environment for dynamic caching.
Near cache
You can configure a dynamic cache instance to create and maintain a near cache, which resides locally within the application server JVM. The near cache contains a subset of the entries that are contained within the remote dynamic cache instance. You can configure a near cache instance with a dynacache-nearCache-ObjectGrid.xml file. For more information, see Configuring an enterprise data grid in a stand-alone environment for dynamic caching. Custom properties are also available for tuning the near cache. For more information, see Dynamic cache custom properties.
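The near-cache lookup pattern described above can be sketched as a two-tier read path: the local in-JVM map is consulted first, and only misses go to the remote grid, with remote hits populating the local tier. This is a minimal, self-contained model, not the eXtreme Scale implementation; the class and field names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of a near-cache read path: a small local map in the
// application server JVM holds a subset of the remote dynamic cache
// instance, and remote hits are copied into the local tier.
public class NearCacheSketch {
    private final Map<String, String> nearCache = new HashMap<>(); // local subset
    private final Map<String, String> remoteGrid;                  // stands in for the remote cache instance

    public NearCacheSketch(Map<String, String> remoteGrid) {
        this.remoteGrid = remoteGrid;
    }

    public String get(String key) {
        String value = nearCache.get(key);
        if (value == null) {
            value = remoteGrid.get(key);
            if (value != null) {
                nearCache.put(key, value); // remote hit populates the near cache
            }
        }
        return value;
    }
}
```

In the real product, keeping the local subset consistent with the grid (invalidation, eviction) is handled by the provider and tuned through the custom properties mentioned above; this sketch omits that machinery.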
Multi-master replication
You can configure a dynamic cache instance to support a multi-master replication topology. For more information, see Design considerations for multi-master replication. By default, the dynamic cache grid configuration uses an internal collision arbiter, which is invoked to resolve collisions during replication. The arbiter first resolves collisions that result from remove and invalidation events, applying these actions over any other event. For all other events, the changes from the lexically lowest-named catalog service domain are applied. For more information, see Planning multiple data center topologies.
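The arbitration rules above can be sketched for the case of two conflicting revisions of the same entry arriving from different catalog service domains: a remove or invalidation wins over any other event, and otherwise the lexically lowest domain name wins. The types and method below are illustrative only; they are not the eXtreme Scale arbiter API.

```java
// Sketch of the default collision-arbitration rules for two conflicting
// revisions of one cache entry from different catalog service domains.
public class CollisionArbiterSketch {
    enum Op { PUT, REMOVE, INVALIDATE }

    record Revision(Op op, String catalogServiceDomain) {}

    // Remove/invalidate events win over any other event; otherwise the
    // change from the lexically lowest-named domain is applied.
    static Revision arbitrate(Revision a, Revision b) {
        boolean aRemoves = a.op() != Op.PUT;
        boolean bRemoves = b.op() != Op.PUT;
        if (aRemoves != bRemoves) {
            return aRemoves ? a : b;
        }
        return a.catalogServiceDomain().compareTo(b.catalogServiceDomain()) <= 0 ? a : b;
    }
}
```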
Additional information
- Dynamic cache Redbook
- Dynamic cache documentation
- DRS documentation