Each installment of Innovations within reach features new information and discussions on topics related to emerging technologies, from both developer and practitioner standpoints, plus behind-the-scenes looks at leading-edge IBM® WebSphere® products.
IBM WebSphere eXtreme Scale is IBM’s strategic software-based elastic caching platform. WebSphere eXtreme Scale is a Java™-based in-memory data grid that dynamically processes, partitions, replicates, and manages application data and business logic across hundreds of servers. WebSphere eXtreme Scale provides the ultimate flexibility across a broad range of caching scenarios.
WebSphere eXtreme Scale is fully elastic. Servers can be added to or removed from the data grid, and the data grid will automatically be redistributed to make the best use of the available resources, while still providing continuous access to the data along with seamless fault tolerance. WebSphere eXtreme Scale provides proven multi-data center capabilities and easily integrates with other IBM application infrastructure products to provide a powerful, high-performance elastic caching solution for your enterprise.
The IBM WebSphere DataPower® XC10 Appliance provides the benefits of WebSphere eXtreme Scale in an easy-to-deploy appliance form factor. Each XC10 appliance provides 240 GB of cache capacity and a simplified management user interface for creating and monitoring data grids and for monitoring appliance health. Appliances can be grouped into highly available collectives, providing failover capability as well as increased overall cache capacity and throughput.
IBM Web Content Manager is included with IBM WebSphere Portal to provide a robust web content management solution. It is used to create, manage, and deliver content for your web site. You can create content either by using the web content authoring portlet or by creating your own customized authoring interface. Web content stored in external content management systems can also be referenced within a Web Content Manager system. You can deliver your web content using web content viewer portlets, the Web Content Manager servlet, or by pre-rendering your site to HTML. The web content viewer portlet is the most widely used option for displaying Web Content Manager content. The Web Content Manager basic cache is enabled by default and is used for servlet rendering; the advanced cache is supported only when rendering through a portlet. Thus, the web content viewer portlet benefits most from improvements to the advanced cache.
Caching content directly in the application server JVM has one obvious benefit: access to the cached content is extremely fast because no network traversal is required. Unfortunately, it also has a number of limitations (Figure 1):
- The maximum size of the cache is limited to the available heap in the JVM. A large and very full JVM heap can lead to long garbage collection pauses, slowing application response time.
- In a cluster of JVMs, content will be cached redundantly in each JVM, making for inefficient use of available cache space.
- A newly started JVM must populate its cache, which means slow initial response times and high load on the back end content source while the cache is warmed.
Figure 1. Traditional dynacache topology
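The heap-bound behavior described above can be illustrated with a minimal sketch (the class and names here are illustrative only, not part of the dynacache API): an LRU map with a fixed entry cap silently discards older entries as soon as the cap is reached, which is exactly the kind of cache discard that limits a local in-JVM cache.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HeapBoundCacheDemo {
    // Minimal LRU cache: once maxEntries is reached, the least recently
    // used entry is discarded to keep the cache within its heap budget.
    static class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        LruCache(int maxEntries) {
            super(16, 0.75f, true); // access-order iteration gives LRU behavior
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;
        }
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(3);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.put("d", "4"); // evicts "a", the least recently used entry
        System.out.println(cache.containsKey("a")); // false: discarded
        System.out.println(cache.size());           // 3: capped by heap budget
    }
}
```

No matter how frequently "a" would have been requested later, the heap budget forces it out; the only remedies in a local cache are a bigger heap (worse garbage collection) or disk offload.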
Offloading the cached content from the application server to a data grid resolves all of these issues (Figure 2):
- The total cache size can scale elastically via the capabilities offered by WebSphere eXtreme Scale while remaining completely independent of the application server JVM heap or application server hardware limitations.
- The cached content is not stored in the application server JVM and therefore does not contribute to garbage collection load.
- The cached content is shared between application servers, meaning that each item is only cached once and newly started servers have immediate access to items previously cached by other servers, rather than having to independently fetch a copy from the back end content source.
Figure 2. WebSphere eXtreme Scale dynacache topology
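The shared-cache benefit can be sketched with a toy model (the names here are ours for illustration; in the real topology the data grid is a remote service, not an in-process map): the first server to miss populates the shared store, and every other server, including newly started ones, immediately gets a hit.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedCacheDemo {
    // Stand-in for the remote data grid shared by all application servers.
    static final Map<String, String> grid = new ConcurrentHashMap<>();
    static final AtomicInteger backEndFetches = new AtomicInteger();

    // Each "server" consults the shared grid first; only a miss goes to
    // the back end content source, and the result is cached for everyone.
    static String render(String key) {
        return grid.computeIfAbsent(key, k -> {
            backEndFetches.incrementAndGet(); // expensive back end call
            return "content-for-" + k;
        });
    }

    public static void main(String[] args) {
        render("wiki/item42"); // server 1: miss, fetches from the back end
        render("wiki/item42"); // server 2: hit, no back end call
        render("wiki/item42"); // newly started server 3: also a hit
        System.out.println(backEndFetches.get()); // 1
    }
}
```

With per-JVM caches, the same sequence would cost three back end fetches and three redundant copies of the item.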
Both WebSphere eXtreme Scale and the XC10 appliance provide integrated support for offloading dynacache content to a data grid. All that is required is to install the WebSphere eXtreme Scale client into the WebSphere Application Server instance and configure the dynacache instance to be offloaded. There are numerous advantages to this configuration, including caches that can be much larger than the application server heap would otherwise support while not paying the penalty of storing on disk, caches that are shared between application server instances, and cached content that can survive an application server restart.
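For reference, selecting the WebSphere eXtreme Scale provider for a cache instance is typically done with custom properties along these lines (property names are taken from the WXS dynamic cache provider documentation as we understand it; verify them against your product version):

```properties
# Use the WebSphere eXtreme Scale dynamic cache provider for this
# cache instance instead of the default in-JVM provider.
com.ibm.ws.cache.CacheConfig.cacheProviderName=com.ibm.ws.objectgrid.dynacache.CacheProviderImpl
# Store cached entries in a remote data grid (for example, an XC10
# collective) rather than embedded in the application server process.
com.ibm.websphere.xs.dynacache.topology=remote
```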
The remainder of this article describes a set of performance tests run by the authors to determine the benefits of using this mechanism to offload the Web Content Manager advanced cache dynacache instance to a data grid deployed on two XC10 appliances.
For the benchmark activity in this series of tests, we deployed a static horizontal WebSphere Portal cluster with two nodes and an IBM HTTP Server. We used the out-of-box wiki templates to populate six web content libraries totaling 50 GB of content in a database. Each wiki content item includes text and images. There were six WebSphere Portal pages, each of which had a web content viewer portlet that rendered a wiki from a web content library. The out-of-box wiki portlet displayed a list of ten wiki content items, by title, at a time. Because each wiki has thousands of wiki items, there are over 1000 list pages that users can page through.
For the test, each user:
- Logs onto WebSphere Portal.
- Randomly navigates to one of the six portal pages that render wikis.
- Randomly goes to a list page and views every wiki item on the page by clicking the wiki content link.
- Repeats steps 2 and 3 ten times.
- Logs out.
There is a ten-second "think time" between user actions. The two user transactions of interest are: going to a list page, and viewing a wiki item.
We tested three cache configurations with regard to Web Content Manager:
- In the first scenario, we used Web Content Manager basic cache, where the rendering portlet doesn't get cache benefits. This represents a situation where there's insufficient application server heap to utilize the advanced cache.
- In the second scenario, we turned on the advanced cache and tuned it to the maximum size based on user load and the 4 GB JVM heap limit. This resulted in a 5000-entry cache in each WebSphere Portal JVM for Web Content Manager advanced cache.
- For the offloaded dynacache scenario, we used two XC10 caching appliances configured in a collective to store the cached content with a maximum of one million cache entries.
Figure 3. Topology diagram, including XC10s
In order to offload the Web Content Manager advanced cache, we configured the specific dynacache instance (the “processing” cache) to use the WebSphere eXtreme Scale dynacache provider rather than the default provider. We then pointed the configuration at the data grid hosted on the XC10 collective. Notice that the cache size indicated, 12000, results in a total cache size of approximately one million entries due to how WebSphere eXtreme Scale deploys the cache across data grid partitions (Figure 4).
Figure 4. Dynacache instance configuration
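As a sanity check on the sizing above (assuming, per our reading of the WXS dynacache provider behavior, that the configured size applies per partition, so total capacity is the configured size times the partition count):

```java
public class GridCapacityEstimate {
    public static void main(String[] args) {
        // The configured cache size is 12000, yet total capacity is roughly
        // one million entries. If the configured size is per partition, the
        // implied partition count is total / perPartition. The exact count
        // is deployment-specific; this only checks the arithmetic.
        int perPartition = 12_000;
        int total = 1_000_000;
        long impliedPartitions = Math.round((double) total / perPartition);
        System.out.println(impliedPartitions); // roughly 83 partitions
    }
}
```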
We used the benchmark to run two use cases:
- In the first case, we permitted the systems to warm up for two hours and then measured throughput and response time over the course of a four-hour, 300-user load run. This enabled us to observe the benefits of a large data grid cache when compared to no cache, or even a small local cache.
- In the second case, we simulated a cold start by bringing the servers up and then immediately applying load and measuring the throughput and response time during the first 30 minutes of load. This provided insight into the impact of bringing a new server online, or restarting a server due to maintenance.
In the steady state scenario, we observed a number of tangible benefits when using a large offloaded cache (Figure 5). Throughput (requests per second) increased 15% when the advanced cache was enabled, and increased another 25% when the Web Content Manager advanced cache was offloaded to the XC10. Compared with the Web Content Manager basic cache scenario, throughput increased 44% by using the XC10.
Figure 5. Steady state throughput comparison (higher is better)
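Note that the reported gains compound rather than add, which is why 15% followed by 25% yields roughly 44% overall:

```java
public class ThroughputGains {
    public static void main(String[] args) {
        // +15% (advanced cache over basic) compounded with +25%
        // (XC10 over advanced cache) gives roughly +44% overall.
        double combined = 1.15 * 1.25; // 1.4375
        long percent = Math.round((combined - 1) * 100);
        System.out.println(percent + "%"); // 44%
    }
}
```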
The response time for jumping to a list page decreased 13% with advanced cache and 35% with XC10 (Figure 6). The response time for displaying wiki content decreased 20% with advanced cache and 78% with XC10 (Figure 7). The large response time improvement in displaying wiki content with XC10 is the benefit of the large capacity of the data grid. When using the default dynacache implementation, there were many cache discards due to the cache size being limited by JVM heap.
Figure 6. Steady state response time comparison for going to a list page (lower is better)
Figure 7. Steady state response time comparison for displaying wiki content (lower is better)
In addition, we observed a 35% decrease in active database connections when the XC10 was used. This is expected, as more content was served from the XC10 cache instead of being retrieved from the Web Content Manager database. We also saw a significant improvement in garbage collection load when offloading the cache. The amount of time spent performing garbage collection dropped by 71% when comparing the default cache configuration with the offloaded cache. This can be attributed to having fewer objects created and destroyed in the application server heap when the cache content is offloaded.
For the cold start scenario, we only tested the local advanced cache and the offloaded advanced cache, because a server with no caching enabled performs approximately the same before and after a warm-up period: there is no cache to warm. The purpose of this test was to demonstrate the value of having a shared, pre-populated cache available to a newly started application server. This avoids the initial warm-up period during which a newly started server has poor response times and triggers heavy load against the back end content source as it retrieves content to fill its cache. With the cache offloaded to a data grid, the cache can be maintained independently of any individual application server restart, or even a restart of the entire application server cluster. This means that bringing a server down for maintenance or adding a new server to handle a spike in load no longer requires carefully warming the server's cache after startup before bringing it back online for production use.
As with the steady state scenario, we observed significant benefits in throughput and response time when testing cold start performance. Throughput improved by 54% (Figure 8). Response time for jumping to a list page decreased by 26% (Figure 9) and response time for displaying a wiki page decreased by 49% (Figure 10).
Figure 8. Cold start throughput comparison (higher is better)
Figure 9. Cold start response time comparison for going to a list page (lower is better)
Figure 10. Cold start response time comparison for displaying wiki content (lower is better)
With a few simple configuration changes and the deployment of the WebSphere DataPower XC10 caching appliance, we were able to significantly improve the IBM Web Content Manager and WebSphere Portal end user experience. We measured a maximum cached content size of 9 GB in our XC10s. This represents a small fraction of the available capacity of 480 GB provided by a two-XC10 collective, but is vastly more than could be cached by an individual application server JVM using local heap space.
Although individual cache operations are inherently slightly slower due to the network access required, this cost is more than offset by being able to greatly increase the cache hit rate through having a larger cache, and by having a cache miss at one application server result in the cache being preloaded for the next application server that needs that item.
Ben Parees is a Senior Software Engineer with 11 years of experience at IBM. He has been a developer on the WebSphere eXtreme Scale product for 2 years and has been involved with the XC10 product since its inception. As a developer, his area of focus has been disk persistence technology. In addition, he served as the product integration architect, helping IBM products leverage WXS and XC10 caching solutions. Ben holds a Master's degree in Computer Science from North Carolina State University and completed his undergraduate degree in Computer Science at Pennsylvania State University.
Wanjun Wang is the WebSphere Portal System Verification Test architect and has worked on Portal, Web Content Management, Personalization, and other middleware for years. He oversees the system test strategy for Portal/WCM releases, which includes test configurations, reliability runs, and automation. He is also actively engaged in high-profile customer consulting for large deployments. Wanjun is based in the IBM Research Triangle Park Lab, USA.
Rama Boggarapu is an Advisory Software Engineer with 9 years of experience at IBM. He recently joined the WebSphere eXtreme Scale development team and is currently working on the Integration team. Prior to this, Rama was a team leader on the WebSphere Application Server Level 2 support team and was recognized as a subject matter expert in the WebContainer, Session Management, Dynamic Cache, and Classloader components. Rama holds a Bachelor's degree in Electronics and Communications.