Tips and techniques for WebSphere eXtreme Scale DynaCache in WebSphere Commerce environments, Part 2: WebSphere eXtreme Scale grid sizing and configuration

This second installment of the series focuses on the grid sizing best practices for the integration of IBM® WebSphere® eXtreme Scale DynaCache in WebSphere Commerce Server environments. WebSphere eXtreme Scale is a distributed caching solution that is a popular provider of DynaCache in large WebSphere Commerce Server environments. WebSphere Commerce Server customers have successfully integrated WebSphere eXtreme Scale DynaCache in large and small production environments. While the configuration of WebSphere eXtreme Scale DynaCache in the WebSphere Commerce Server environment is simple, you need to pay special attention to the best practices for design, usage, operational patterns, and tuning.


Dr. Debasish Banerjee (debasish@us.ibm.com), WebSphere Consultant, IBM

Dr. Debasish Banerjee is presently a WebSphere consultant in IBM Software Services. He started his WebSphere career as the WebSphere internationalization architect. Extreme transaction processing, distributed caching, elastic computing, and cloud computing are his current areas of interest. Debasish received his Ph.D. in the field of combinator-based functional programming languages.



Ravi Tripathi (ravi.tripathi@us.ibm.com), Managing Consultant, IBM

Ravi Tripathi is an IBM Managing Consultant working on the Smarter Commerce platform in IBM Software Services for Industry Solutions (ISS-IS). In this role, Ravi advises large retail corporations about Smarter Commerce implementation architecture, design, infrastructure, performance, and launch. He is an expert in designing and developing omni-channel, high-performing Smarter Commerce solutions for large retailers in North America. Ravi received his Master's degree in Production Engineering.



Jim Krueger (jim_krueger@us.ibm.com), Advisory Software Engineer, IBM

Jim Krueger is an IBM Advisory Software Engineer working on the WebSphere eXtreme Scale development team. He is the lead WebSphere eXtreme Scale Dynamic Cache developer. In this role, he frequently advises large corporations about WebSphere eXtreme Scale technology. Before joining WebSphere eXtreme Scale, Jim was a member of the WebSphere Application Server EJBContainer development team.



Anupam Basu (anupam@us.ibm.com), IT Architect, IBM

Anupam Basu is a certified IT architect with IBM Software Group who helps customers implement e-commerce solutions with IBM Smarter Commerce and the IBM Middleware portfolio of products. He has extensive experience in designing and developing enterprise architectures. He helped build several high-volume and high-performing Smarter Commerce solutions for large retailers in North America. He earned a double Master's degree in Statistics and Computer Science from the Indian Statistical Institute, Calcutta, India.



18 January 2013


Introduction to WebSphere eXtreme Scale grid sizing

When designing any WebSphere eXtreme Scale grid, you have to determine the amount of heap space, the number of WebSphere eXtreme Scale servers, the amount of physical RAM, the number of physical machines, and a few other parameters. The DynaCache and general WebSphere eXtreme Scale grid sizing techniques are described in the "Dynamic cache capacity planning" topic in the WebSphere eXtreme Scale information center (see Resources). For more information about WebSphere eXtreme Scale, refer to the IBM Redbooks publication "WebSphere eXtreme Scale Best Practices for Operation and Management" (see Resources).

This article outlines the strategies and techniques that are involved in sizing the WebSphere eXtreme Scale DynaCache grid. The sizing strategy is based on the existing grid-sizing techniques that are modified, enhanced, and fine-tuned based on our success at handling large WebSphere Commerce Server environments with the WebSphere eXtreme Scale DynaCache grid.


Sizing a WebSphere eXtreme Scale grid

The WebSphere eXtreme Scale DynaCache grid sizing is an iterative process that involves the following steps:

  1. Determine an initial estimate for a reasonable WebSphere eXtreme Scale grid configuration.
  2. Set up a performance test environment using representative data and traffic.
  3. Monitor the performance test environment for the number of objects that are cached, the total space that is consumed, the cache eviction rate, the CPU utilization, the heap utilization, the garbage collection behavior, and any other relevant metrics.
  4. Analyze the monitored data and produce a revised sizing estimate that more accurately reflects the needs of the environment.

This process requires a number of iterations, with WebSphere eXtreme Scale DynaCache grid reconfiguration and grid redeployment as the load tests progress. These additional iterations provide more accurate insights into the host environment, the grid usage, the traffic patterns, and so on. You use the same iterative process in a live e-commerce environment to determine the real access pattern of the users.

To determine a memory estimate for the WebSphere eXtreme Scale DynaCache grid, you estimate the total size of the cached objects. If you are already running your e-commerce site with traditional DynaCache, you can use the Cache Monitor or another appropriate tool to determine the amount of data you are caching in memory and on the overflow disks. For more information about the Cache Monitor tool, see Resources. For new DynaCache environments, this determination takes more time and is more iterative. WebSphere Commerce application architects and developers arrive at an initial estimate of the amount of cache by estimating the number of objects in the various categories (HTML fragments, JSPs, commands, and so on) and their average sizes. These initial estimates are refined later during application load tests and integration.

In virtualized environments, WebSphere eXtreme Scale grids are spread across a number of machines or operating system images. Each machine typically contains more than one WebSphere eXtreme Scale server. You first determine how many WebSphere eXtreme Scale servers or JVMs in general can be safely hosted in a machine. Only performance tests with realistic data access patterns can provide definitive answers to the grid-sizing question. The following empirical guidelines are used to determine an initial number of WebSphere eXtreme Scale servers to host on a machine.


Determining the number of JVMs in a machine

The physical RAM available on a machine determines the number of JVMs that can be safely supported on a machine. You must not allow an insufficient amount of physical RAM to cause the paging of JVMs in the WebSphere eXtreme Scale grid.

  • For 32-bit operating systems, for a JVM with J GB of user heap space, estimate at least (J + 0.4) GB of physical RAM to prevent paging.
  • For 64-bit systems, add at least an extra 1GB of physical RAM to the user heap size of a JVM.

For example, in a 64-bit system to safely run five JVMs each with 4GB of user heap, you allocate at least 5 * (4 + 1)GB = 25GB of physical RAM.

As a rule of thumb, estimate that the operating system consumes 20% of the available CPU; in a virtualized environment, estimate 25%.

  • For 32-bit operating systems, you allocate 1 core to each JVM.
  • For 64-bit operating systems, you allocate 0.75 core to each JVM.

For example, in an 8-core 64-bit system, you allocate 2 cores (25% of total 8 core) to the operating system. This leaves 6 cores for running JVMs. If you consider you have 0.75 core per JVM, you can potentially host (6 / 0.75) = 8 JVMs in the machine.

NOTE: To prevent paging and CPU overloading, host the smaller of the two JVM counts derived from the amount of physical RAM and from the available CPU power.
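The following minimal sketch puts these two rules together to produce an initial per-machine JVM count. It is an illustration only: the class name, input values, and the 64-bit assumptions are ours, not part of any WebSphere eXtreme Scale API.

    // Initial estimate of WXS server JVMs per machine, based on the 64-bit guidelines above.
    public class JvmsPerMachine {
        public static void main(String[] args) {
            double ramGb = 32;        // physical RAM on the machine, in GB (assumed)
            int cores = 8;            // physical CPU cores on the machine (assumed)
            double heapPerJvmGb = 4;  // user heap configured per 64-bit WXS server JVM
            boolean virtualized = true;

            // RAM rule: each 64-bit JVM needs roughly (heap + 1) GB to avoid paging.
            int byRam = (int) (ramGb / (heapPerJvmGb + 1.0));

            // CPU rule: reserve 20% for the OS (25% if virtualized),
            // then budget 0.75 core per 64-bit JVM.
            double osShare = virtualized ? 0.25 : 0.20;
            int byCpu = (int) (cores * (1.0 - osShare) / 0.75);

            // Take the smaller count so neither RAM nor CPU is overcommitted.
            System.out.println("Initial JVMs per machine: " + Math.min(byRam, byCpu));
        }
    }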

The formulas that are described in these examples can be used to estimate the number of WebSphere eXtreme Scale servers for both stand-alone environments and environments that are embedded inside WebSphere Application Server JVMs. Remember, this is just an initial estimate. During an actual load test, you will adjust the number of JVMs on your physical machines.

Example of an initial estimate for grid deployment

Assume that the total amount of data that is cached is C GB. By default, the data is stored in a compressed form in the WebSphere eXtreme Scale DynaCache grid. Most of the data that is cached in the WebSphere Commerce Server environment is HTML pages and JSP fragments, and the standard compression algorithm that is used by WebSphere eXtreme Scale yields a good compression ratio for both. In this example, use a compression ratio of 0.5 for the grid space, and the space requirement becomes:

Grid space requirement = 0.5*C GB

The WebSphere eXtreme Scale middleware adds overhead for efficiently storing and managing cached data. In this example, use a 15% overhead for data storage and the space requirement becomes:

Grid total space requirement = 1.15*0.5*C GB

If the grid is configured with n (n >= 0) replicas, the space requirement becomes:

Grid total space requirement with replicas = 1.15*(Primary + Number of replicas)*0.5*C GB = 1.15*(1+n)*0.5*C GB

The result is the amount of data that is stored in the heap space of container servers.

To handle container failovers and for JVM efficiency, determine the grid size with 50% of the usable heap space of the JVMs. With 50% heap usage, the space requirement becomes:

Grid total space requirement with failovers = 2*1.15*(1+n)*0.5*C = 1.15*(1+n)*C GB

To determine J, the number of container JVMs, assume that each container server JVM is configured with H GB of usable heap space. The total number of container JVMs needed to provide the required grid space, rounded up to the next integer, becomes:

Grid total number of container JVMs: J = (1.15*(1+n)*C) / H

To ensure uniform distribution of data, CPU load, and network traffic across all the machines, place the WebSphere eXtreme Scale DynaCache containers on machines of identical capacity, with each hosting the same number of WebSphere eXtreme Scale container servers.

Assuming the availability of M machines to host these WebSphere eXtreme Scale containers, use (J/M) containers per machine. Because each machine must be configured with an identical number of containers, round J up to the next multiple of M (that is, round J/M up to the next integer) if the original division result is not an integer.

Grid total number of containers per machine = (J/M)
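The following sketch expresses the same calculation in code. The variable names mirror the symbols in the formulas (C, n, H, M), the input values are the ones used in the example that follows, and the helper class itself is illustrative rather than part of any product API.

    // Container-count estimate: J = ceil(1.15 * (1 + n) * C / H), rounded to a multiple of M.
    public class GridSizing {
        public static void main(String[] args) {
            double c = 40;   // C: total data to cache, in GB
            int n = 0;       // n: replicas per partition
            double h = 3.2;  // H: usable heap per container JVM, in GB
            int m = 4;       // M: identical machines hosting containers

            // 0.5 compression, 15% overhead, 50% heap usage: 2 * 1.15 * (1 + n) * 0.5 * C
            double gridHeapGb = 1.15 * (1 + n) * c;

            int j = (int) Math.ceil(gridHeapGb / h);          // J, rounded up to an integer
            int perMachine = (int) Math.ceil(j / (double) m); // containers per machine
            System.out.printf("J=%d, %d per machine, %d in total%n",
                    j, perMachine, perMachine * m);           // prints J=15, 4 per machine, 16 in total
        }
    }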

Grid configuration and deployment files

A WebSphere eXtreme Scale grid is configured and deployed with XML files. For a DynaCache grid, in WebSphere eXtreme Scale up to V8.5, the two configuration files are provided in the [WXS-Install-Location]/ObjectGrid/DynaCache/etc directory:

  • dynacache-remote-objectgrid.xml
  • dynacache-remote-deployment.xml

The grid definition file, dynacache-remote-objectgrid.xml, defines the grid and its maps, and normally needs to be altered only if you change the grid name. The default grid deployment file, dynacache-remote-deployment.xml, specifies one asynchronous replica and 47 partitions. Alter the deployment file to suit your environment by using the tips in this article.

WebSphere Commerce applications use the DynaCache grid as a side cache: if a <key, value> pair is present in the cache, it is returned; otherwise, the value is fetched from the backend, created, and stored in the cache. Do not use replicas in DynaCache environments for WebSphere Commerce because, if cached data is lost due to a container failure, the data is rebuilt on the next GET request and stored back in the grid. The absence of replication lessens the CPU load on the container machines and the network traffic. The absence of replicas also means that all the shards in the grid are primaries. For a WebSphere eXtreme Scale DynaCache grid, five partitions per container are usually optimal.

Example of grid configuration without replicas

In this example, 4 machines are available to cache 40GB of data. With a compression ratio of 0.5 and an overhead of 15%, the grid space requirement is:

1.15*0.5*C = 1.15*0.5*40 = 23GB

There are no replicas in the grid, so with 50% heap usage you need 46GB of heap space for the grid. With a gencon model of garbage collection and about 3.2GB of tenured heap space per container JVM, the number of containers is:

46 / 3.2 = 14.375, rounded up to 15

For more information on garbage collection, refer to the "Java technology, IBM style: Garbage collection policies, Part 1" article.

To have an equal number of containers in each machine, the number 15 is rounded up to the next multiple of 4, making the total number of containers 16 and resulting in 4 containers in each machine. The total amount of tenured heap space is:

3.2*16 = 51.2GB

With 5 partitions per container, you have a total of 80 partitions in the grid. For a uniform distribution of cached objects across all containers, you use a prime number for the numberOfPartitions grid deployment attribute. So, you round 80 to the next prime number and the number of partitions in the grid is:

83 partitions
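If you want to automate that last step, the following illustrative helper picks the number of partitions as the next prime greater than or equal to five partitions per container, skipping 101 as recommended later in this article. The class and method names are ours, not part of WebSphere eXtreme Scale.

    // numberOfPartitions = next prime >= 5 * containers, never 101.
    public class PartitionCount {
        static boolean isPrime(int n) {
            if (n < 2) return false;
            for (int i = 2; (long) i * i <= n; i++) {
                if (n % i == 0) return false;
            }
            return true;
        }

        static int numberOfPartitions(int containers) {
            int p = 5 * containers;            // five partitions per container
            while (!isPrime(p) || p == 101) {  // round up to the next prime, avoid 101
                p++;
            }
            return p;
        }

        public static void main(String[] args) {
            System.out.println(numberOfPartitions(16)); // prints 83 for this example
        }
    }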

You now have all the needed deployment attributes to enter in the grid deployment XML file. To keep the original files intact, copy the default DynaCache configuration and deployment files to another location and make the needed modifications. To use the modified files, start the WebSphere eXtreme Scale containers and provide the full paths to the configuration and deployment files in the startup scripts.
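As an illustration, a container start script for this scenario might look like the following sketch. The install location, configuration directory, host name, and server name are assumptions; check the startOgServer documentation for the exact options available in your release.

    #!/bin/sh
    # Start a stand-alone WXS container with the copied DynaCache files (sketch).
    WXS_HOME=/opt/IBM/WebSphere/eXtremeScale      # assumed install location
    CFG=/opt/mycompany/wxs/config                 # assumed location of the copied files

    ${WXS_HOME}/ObjectGrid/bin/startOgServer.sh container01 \
      -catalogServiceEndPoints cathost1:2809 \
      -objectgridFile ${CFG}/dynacache-remote-objectgrid.xml \
      -deploymentPolicyFile ${CFG}/dynacache-remote-deployment.xml \
      -jvmArgs -Xgcpolicy:gencon -Xms2g -Xmx4g -Xmn800m -Xcompressedrefs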

The default name of the DynaCache grid is "DynaCache_REMOTE". For ease of identification in log files, and also to make sure that the WebSphere Application Server clients do not accidentally get connected to any unintended DynaCache grid, you can use non-default descriptive names for DynaCache grids. For example, DynaCache_REMOTE_[Company_Name]_[Environment_Name], or any other convention to make the grid names unique.

If you choose to use a non-default name like DYNACACHE_REMOTE_MYCOMPANY_PROD, you need to update the following:

Update the dynacache-remote-objectgrid.xml file to reflect the non-default name by changing these lines:

  • From: <objectGrid name="DYNACACHE_REMOTE" txTimeout="30">
  • To: <objectGrid name="DYNACACHE_REMOTE_MYCOMPANY_PROD" txTimeout="30">

Update the dynacache-remote-deployment.xml file to reflect the non-default name by changing these lines:

  • From: <objectgridDeployment objectgridName="DYNACACHE_REMOTE">
  • To: <objectgridDeployment objectgridName="DYNACACHE_REMOTE_MYCOMPANY_PROD">

NOTE: When you use a non-default DynaCache grid name, you must also define the custom property com.ibm.websphere.xs.dynacache.grid_name in all the client WebSphere Application Server JVMs. This definition is in addition to the other JVM custom properties that need to be defined for the WebSphere Application Server clients to be able to connect to the WebSphere eXtreme Scale DynaCache grid. For more information, refer to the "Configuring the dynamic cache provider for WebSphere eXtreme Scale" topic on the WebSphere eXtreme Scale information center, see Resources. For more details, refer to the "Configure WebSphere Commerce to Use WebSphere eXtreme Scale for dynamic cache to improve performance and scale" developerWorks wiki page.

Update the dynacache-remote-deployment.xml file with the proper number of partitions:

  • From: numberOfPartitions="47"
  • To: numberOfPartitions="83"

To prevent unnecessary churn during grid startup, you update the number of initial containers attribute.

Update the dynacache-remote-deployment.xml file with the number of containers:

  • From: numInitialContainers="1"
  • To: numInitialContainers="16"

NOTE: You cannot use 101 as the numberOfPartitions grid deployment attribute. If you arrive at 101 as the value of this attribute with this sizing strategy, you need to use 103 instead.

In WebSphere eXtreme Scale (V7.1.1 or earlier), the use of 101 as numberOfPartitions results in a poor use of the shard space because only about 2/101th portion of each shard is used. The value of 101 also results in many cache evictions. The application continues to place objects in the DynaCache and the WebSphere eXtreme Scale software continues to evict a significant portion of them from the cache. The WebSphere eXtreme Scale V7.1.1 code is updated to address this issue. However, for better grid performance, do not use 101 as the value of the numberOfPartitions attribute even in WebSphere eXtreme Scale V7.1.1 or later.
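Putting the deployment-file edits together, the modified dynacache-remote-deployment.xml looks roughly like the following sketch. Only the attributes discussed in this article are shown as changed; the mapSet name and the map references (shown here as "...") must be kept exactly as they appear in the file that ships with the product.

    <?xml version="1.0" encoding="UTF-8"?>
    <deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
        xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
      <objectgridDeployment objectgridName="DYNACACHE_REMOTE_MYCOMPANY_PROD">
        <!-- Keep the mapSet name and <map ref="..."/> entries from the shipped file. -->
        <mapSet name="..." numberOfPartitions="83" numInitialContainers="16"
                minSyncReplicas="0" maxSyncReplicas="0" maxAsyncReplicas="0">
          <map ref="..."/>
        </mapSet>
      </objectgridDeployment>
    </deploymentPolicy>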


Determining the number of cached entries

With a regular WebSphere eXtreme Scale cache, the memory and related sizing is determined solely by the amount of heap space. However, for WebSphere eXtreme Scale DynaCache grid, you also consider the total number of objects to be cached in the grid, and you set the grid deployment parameters accordingly. The number of entries that are cached in the WebSphere eXtreme Scale DynaCache depends on the numberOfPartitions grid-deployment parameter and the cache size attribute that is defined at the WebSphere Application Server client.

The cache size attribute is set for each application server, and is accessed in the administrative console of any WebSphere Application Server with the WebSphere eXtreme Scale DynaCache client installed.

  1. Expand the Container Services category and select Dynamic cache service, as shown in Figure 1.
    Figure 1. Admin Console selection of dynamic cache service
  2. In the Dynamic cache service configuration window, select the drop-down menu for Cache provider and select WebSphere eXtreme Scale.
  3. Set the Cache size attribute to the correct value (default is 2000) and then select the Enable cache replication check box, as shown in Figure 2.
    Figure 2. Admin Console check box for Enable cache replication

In the traditional DynaCache environment, the server-scoped cache size attribute signifies the number of cached entries that reside in the JVM heap of the selected application server. However, WebSphere eXtreme Scale DynaCache provides a single coherent cache that is shared by all the client JVMs, and the cache size attribute instead limits the number of entries that each grid partition can hold, as expressed by the following invariant.

WebSphere eXtreme Scale DynaCache runtime invariant:

Total number of cached entries = Cache size * number of partitions

The WebSphere eXtreme Scale DynaCache runtime invariant is maintained by evicting cache entries from the grid, if necessary, following the least recently used (LRU) algorithm. Whenever the number of cached entries in a partition exceeds the cache size value, the WebSphere eXtreme Scale software starts LRU evictions to bring down the number of cached entries in that partition to cache size.

The proper setting of the cache size attribute is important in any WebSphere eXtreme Scale DynaCache environment. If the attribute is set too high, out-of-memory exceptions can occur because the grid tries to accommodate too many entries and exhausts the heap space. If the attribute is set too low, many LRU evictions occur, which has a negative effect on application performance. Too many cache evictions also result in high CPU usage on the machines hosting WebSphere eXtreme Scale DynaCache containers. Make sure that the grid sizing is accurate to avoid excessive LRU evictions from the cache.

In this example, if you accept the default value of the cache size attribute, you arrive at the total number of entries in the grid as 2000*83 = 166000.

Customers who are already running their e-commerce site with traditional DynaCache know the number of elements to be cached. New customers might start with the default cache size attribute, which by the invariant gives 2000*numberOfPartitions expected cached entries. Use WebSphere eXtreme Scale grid monitoring during load runs and live runs to make adjustments that more accurately predict the total number of entries to be cached.

In most cases, for large WebSphere Commerce Server installations, you increase the default of 2000 to a higher value to store the necessary cache instances in the grid. As the WebSphere eXtreme Scale DynaCache runtime invariant shows, the capacity of the WebSphere eXtreme Scale DynaCache, in terms of the total number of cached objects, can be changed by altering the number of partitions, the cache size, or both. The number of partitions attribute is based on the total number of container JVMs, and the total number of JVMs is based on the total heap space requirement for the grid. Any large increase in the number of partitions results in a higher CPU load on the machines hosting WebSphere eXtreme Scale containers. Any change in the number of partitions also requires a complete WebSphere eXtreme Scale DynaCache grid restart with loss of data, as well as adjustments in the number of WebSphere eXtreme Scale containers. However, a change in cache size requires the restart of a single application server, and can easily be accomplished without perceptible service loss.
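For example, the following sketch backs into a cache size value from an expected entry count. The 500,000-entry target is purely illustrative; use the entry counts you observe through monitoring.

    // Runtime invariant: total cached entries = cache size * number of partitions.
    public class CacheSizeAttribute {
        public static void main(String[] args) {
            int partitions = 83;             // numberOfPartitions from the deployment file
            long expectedEntries = 500_000;  // expected cached entries (assumed, from monitoring)

            // Per-server "Cache size" value needed to hold the expected entries
            // without constant LRU evictions (add headroom in practice).
            long cacheSize = (expectedEntries + partitions - 1) / partitions; // ceiling division
            System.out.println("Set Cache size to at least " + cacheSize);    // prints 6025
        }
    }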

Tips for setting number of entries to be cached

  • Set the cache size attribute to identical values across all the client WebSphere Commerce Server JVMs. Failure to set an identical cache size attribute creates unpredictable situations during the start of WebSphere Commerce Server cluster members.
  • Minimize cache LRU evictions by adjusting the cache size attribute according to the invariant formula.
  • Determine the value for the number of partitions attribute based on the memory sizing example.

Administering and configuring WebSphere eXtreme Scale catalogs and containers

Administering WebSphere eXtreme Scale servers that are deployed in a WebSphere Commerce environment is relatively easy and straightforward. Catalog servers start along with the hosting WebSphere Application Server JVM. For WebSphere eXtreme Scale containers, you can create a dummy EAR (or WAR) file that contains only the two DynaCache configuration and deployment XML files (objectGrid.xml and objectGridDeployment.xml) in its META-INF directory and deploy the file in a WebSphere Application Server cluster. The container JVMs start when that dummy application starts. Similarly, to bring down the containers, you can simply stop the dummy application without stopping any WebSphere cluster members. Stopping one or more cluster members stops the corresponding containers. Similarly, stopping the host WebSphere Application Server JVM stops the corresponding catalog server. All of these operations can be performed from the WebSphere Application Server Network Deployment admin console or with wsadmin scripts.
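A minimal dummy module might be laid out as in the following sketch. The module name is arbitrary; the two XML files are copies of your edited DynaCache grid and deployment files, renamed according to the convention that the WebSphere eXtreme Scale documentation describes for container modules.

    GridContainers.war                (dummy module, no servlets required)
      META-INF/
        objectGrid.xml                (copy of the edited dynacache-remote-objectgrid.xml)
        objectGridDeployment.xml      (copy of the edited dynacache-remote-deployment.xml)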

In stand-alone environments, you can start the WebSphere eXtreme Scale servers by developing shell scripts that invoke the startOgServer command supplied with WebSphere eXtreme Scale in the [WXS-Install-Location]/ObjectGrid/bin directory. To stop the servers, use the teardown command from xsadmin (for WebSphere eXtreme Scale V7.1 and earlier) or xscmd (for WebSphere eXtreme Scale V7.1.1 and later). For details on the command, refer to the xsadmin utility reference topic on the WebSphere eXtreme Scale information center, see Resources.
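For example, a stop script for this scenario might invoke xscmd as in the following sketch. The install location, catalog host and port, and grid name are the assumptions used earlier in this article; verify the option names against the xscmd reference for your release.

    # Tear down the stand-alone DynaCache grid containers (sketch, WXS V7.1.1 or later).
    WXS_HOME=/opt/IBM/WebSphere/eXtremeScale
    ${WXS_HOME}/ObjectGrid/bin/xscmd.sh -c teardown \
      -cep cathost1:2809 \
      -g DYNACACHE_REMOTE_MYCOMPANY_PROD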

For operational simplicity and ease of troubleshooting and maintenance, use a suitable naming convention for WebSphere eXtreme Scale servers. Also keep the following files, among others, in non-default locations that are clearly separated from the IBM-installed WebSphere eXtreme Scale file path:

  • configuration files
  • properties files
  • generated log files
  • verbose-gc output files
  • Java™ cores
  • Java heap dumps

For example, all the user-specific files can be in a path similar to: [WXS-Install-Location]/[company-name]/wxs/ directory. The name of the WebSphere eXtreme Scale servers and all the file paths can be specified as arguments to the startOgServer command and as JVM startup arguments.

You can also explicitly specify various ports when you start the WebSphere eXtreme Scale servers. Typically, more than one WebSphere eXtreme Scale container resides in a machine. Though you do not host more than one catalog-server cluster member for the same grid in a machine, a machine can host more than one catalog server, each of which manages a different WebSphere eXtreme Scale grid.

For operational simplicity, we suggest the following explicit specification of ports for WebSphere eXtreme Scale servers in the startup scripts, or in the server.properties file, leaving the remaining ports to be assigned automatically by the WebSphere eXtreme Scale run time (a sketch of such start commands follows the two lists below). For more information on properties files, refer to the "Sample properties files" topic in the WebSphere eXtreme Scale information center, see Resources.

For WebSphere eXtreme Scale catalog server ports:

  • The listenerPort to which the Object Request Broker (ORB) binds. In WebSphere environments, the ORB port is specified as BOOTSTRAP_ADDRESS.
  • The JMXServicePort for MBean server is needed only for stand-alone installations.
  • The clientPort and peerPort in the catalogServiceEndPoints specification. Any unassigned ports can be used. You can use ports 6601 and 6602 for the first catalog server hosted on the machine. If there are other catalog servers on the same machine for different grids, you can use a convention such as <6601 + 2(n-1), 6602 + 2(n-1)> when you specify this pair of ports, where n >= 1 is the catalog server number. The catalog server numbers are arbitrary. This or a similar port selection convention avoids accidental port conflicts.

For WebSphere eXtreme Scale container ports:

  • The listenerPort for binding the ORB. Again BOOTSTRAP_ADDRESS is the ORB port in WebSphere environments.
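The following sketch shows what such explicit port assignments might look like on start commands. Host names, server names, and port values are examples only, and the option names should be verified against the startOgServer reference for your release.

    WXS_HOME=/opt/IBM/WebSphere/eXtremeScale

    # First catalog server on this machine: clientPort 6601, peerPort 6602,
    # plus explicit ORB (listener) and JMX service ports.
    ${WXS_HOME}/ObjectGrid/bin/startOgServer.sh cat01 \
      -catalogServiceEndPoints cat01:cathost1:6601:6602 \
      -listenerPort 2809 \
      -JMXServicePort 1099

    # Container server: explicit ORB (listener) port following the 30<n>10 convention
    # (add the -objectgridFile and -deploymentPolicyFile options shown earlier).
    ${WXS_HOME}/ObjectGrid/bin/startOgServer.sh container01 \
      -catalogServiceEndPoints cathost1:2809 \
      -listenerPort 30110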

Tips for port assignments

  • You must avoid the ephemeral port range when you assign explicit ports. The use of any port in the ephemeral range can lead to unpredictable behavior because of possible port conflicts with the underlying operating system. The ephemeral port range is operating system-dependent. Refer to the NcFTP site "The Ephemeral Port Range" documentation, see Resources.
  • You can use a formula for assigning the listener and JMX service ports to avoid port conflicts and to save room for adding more WebSphere eXtreme Scale servers in the same machine. For example, in the AIX® operating system, 30<WXS Container Number>10 can be a good strategy for assigning listener ports for WebSphere eXtreme Scale containers. Following this strategy of port allocation, you have <30110, 30210, 30310, …> as the listener ports of the WebSphere eXtreme Scale containers hosted in the same machine.

Configuring a generic JVM for WebSphere eXtreme Scale servers

For the WebSphere eXtreme Scale servers, it is recommended that you use the generational garbage collection model. The suitability of the generational model can be verified later by analyzing verbose-gc outputs. For details, refer to the developerWorks article "Java technology, IBM style: Garbage collection policies, Part 1".

For 32-bit machines, you might use 1GB to 1.5GB of maximum heap space. The minimum heap can be initially set to half of the maximum heap setting. A 250MB nursery space can be used for 32-bit servers.

Typically, DynaCache grids in WebSphere Commerce installations contain a relatively large amount of data. In WebSphere Commerce environments, the use of 64-bit JVM is more popular because of the availability of larger heap space when compared to their 32-bit counterparts. For 64-bit container JVMs, do not exceed a maximum heap size of 5GB to keep the garbage collection pauses to acceptable levels. The following JVM settings can be used in configuring the JVMs for WebSphere eXtreme Scale servers for hosting DynaCache.

Tips for 64-bit JVM heap configurations

  • For WebSphere eXtreme Scale servers, use a generational garbage collection model.
  • For catalog server JVMs, a maximum heap space of 1GB is usually sufficient. The minimum heap for catalog servers can be set to 256MB with a nursery space of 64MB. The argument setting for catalog servers looks something like:
    -Xgcpolicy:gencon -Xms256m -Xmx1g -Xmn64m
  • For container JVMs, you can use a maximum heap space of 4GB. The minimum heap space of 2GB can be set at the beginning and tuned later, if necessary, after you analyze the data collected during load test and live runs. A nursery space of 800MB can be used. The use of compressed pointers is recommended. The argument setting for container servers looks like:
    -Xgcpolicy:gencon -Xms2g -Xmx4g -Xmn800m -Xcompressedrefs

Other suggested JVM settings for better performance

Most of the current commerce IT environments use IPv4 for internet addressing. In pure IPv4 environments, to prevent unnecessary conversions back and forth between IPv6 and IPv4 formats, you need to set the following in the container startup scripts:

For stand-alone WebSphere eXtreme Scale containers, set the JVM argument in the container scripts:

  • -Djava.net.preferIPv4Stack=true

For WebSphere eXtreme Scale containers that are hosted in WebSphere Application Servers, set the JVM custom property:

  • java.net.preferIPv4Stack = true

You can also reduce the number of local host DNS lookups by caching the results of lookups for a reasonable amount of time, say 1 minute, by setting the following (a consolidated sketch follows these lists):

For stand-alone WebSphere eXtreme Scale containers:

  • Container startup scripts: -Dcom.ibm.cacheLocalHost=true
  • Container installation: add the time-to-live value networkaddress.cache.ttl = 60 to the [WXS-Home]/java/jre/lib/security/java.security file.

For WebSphere eXtreme Scale containers hosted in WebSphere Application Servers, set the following JVM custom properties:

  • com.ibm.cacheLocalHost = True
  • networkaddress.cache.ttl = 60
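Putting the heap, IPv4, and DNS-caching settings together, the JVM arguments passed by a stand-alone container start script might look like the following sketch. The values are the examples used in this article; containers hosted in WebSphere Application Server use the equivalent generic JVM arguments and custom properties instead.

    # Consolidated JVM arguments for a stand-alone DynaCache container (sketch).
    WXS_HOME=/opt/IBM/WebSphere/eXtremeScale
    JVM_ARGS="-Xgcpolicy:gencon -Xms2g -Xmx4g -Xmn800m -Xcompressedrefs \
      -Djava.net.preferIPv4Stack=true -Dcom.ibm.cacheLocalHost=true"

    # (add the -objectgridFile and -deploymentPolicyFile options shown earlier)
    ${WXS_HOME}/ObjectGrid/bin/startOgServer.sh container01 \
      -catalogServiceEndPoints cathost1:2809 \
      -jvmArgs ${JVM_ARGS}

    # In addition, add networkaddress.cache.ttl = 60 to the
    # [WXS-Home]/java/jre/lib/security/java.security file, as described above.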

Collecting and monitoring statistics with the WebSphere eXtreme Scale console server

WebSphere eXtreme Scale provides catalog and container servers, but it also includes a third server, known as the WebSphere eXtreme Scale console server, which is used for statistics collection and monitoring. Console servers can be used to monitor various grid statistics. The console server is not essential for the operation of WebSphere eXtreme Scale grid and it is not installed by default. To install the console server, you need to either select it during GUI installation or specify a parameter when you install the console server component in silent mode. For details about the WebSphere eXtreme Scale console server, refer to the "Starting and logging on the web console" topic in the WebSphere eXtreme Scale information center, see Resources.

The console server can also be installed and started from the catalog server installation. There is no special setting needed for WebSphere eXtreme Scale console server JVM, and no reason to start the WebSphere eXtreme Scale console servers from each of the catalog server installations. Remember that only one instance of the WebSphere eXtreme Scale console server can be started. If the machine becomes unavailable, another instance of the console server can be started from another catalog server. A console server can be configured to monitor more than one WebSphere eXtreme Scale grid.

Tips for hosting WebSphere eXtreme Scale DynaCache servers

  • For maintaining isolation, machines that are used to host WebSphere eXtreme Scale servers should not be used to host other applications. If WebSphere eXtreme Scale is hosted in a WebSphere Application Server cluster, no other application should be deployed in that WebSphere Application Server cluster.
  • Each physical machine that hosts the WebSphere eXtreme Scale containers should be identical in memory capacity, CPU power, and network card capacity. When you determine the number of WebSphere eXtreme Scale servers to machines, you need to carefully consider the CPU power as well as the amount of physical memory available to each machine.
  • Host catalog servers on machines that do not contain any WebSphere eXtreme Scale containers. If that is not possible, use a separate installation for the catalog servers. WebSphere eXtreme Scale console servers can be hosted in the catalog server installations, and you can start only one instance of the WebSphere eXtreme Scale console server from a catalog server installation. Each physical machine contains the same number of WebSphere eXtreme Scale containers.
  • You should have at least three machines to host the WebSphere eXtreme Scale DynaCache containers. The use of only two machines might overload a machine or create a single-point-of-failure (SPOF) situation when one machine goes down for scheduled maintenance or other reasons. All of the machines that host the WebSphere eXtreme Scale catalog and container servers for the same grid should reside in the same data center with a reliable high-speed network infrastructure.

Conclusion

This second installment provides ideas, insights, and best practices for successful grid sizing when you implement caching in WebSphere Commerce Server using the WebSphere eXtreme Scale DynaCache. Although the series focuses on WebSphere Commerce environments, most of the tips and techniques are applicable to WebSphere eXtreme Scale DynaCache in any environment, including in a pure WebSphere Application Server Network Deployment or in WebSphere Portal Server environments. As further best practices emerge and more utilities for monitoring are developed, the authors intend to provide updates for the usage of WebSphere eXtreme Scale DynaCache.


Acknowledgments

The authors are grateful to the following people for technical discussions during the course of their work with WebSphere Commerce and WebSphere eXtreme Scale: Kyle Brown, IBM DE, ISSW; Brian Martin, STSM, WebSphere eXtreme Scale and XC10 Lead Architect; Douglas Berg, WebSphere eXtreme Scale Architect; Chris Johnson, WebSphere eXtreme Scale Architect; Jared Anderson, WebSphere eXtreme Scale Architect; Rohit Kelapure, WebSphere Application Server Development; Joseph Mayo, XC10 Development; Surya Duggirala, WebSphere Application Server Performance Lead; Matt Kilner, JDK L3; Brian Thomson, STSM, WebSphere Commerce Server CTO; Misha Genkin, WebSphere Commerce Server Performance Architect; Robert Dunn, WebSphere Commerce Server Development; Kevin Yu, ISS-IS. The authors would like to acknowledge Mary A. Brooks for a superb job in copy editing. A very special thanks to Cheenar Banerjee for her assistance in proofreading and suggesting several readability improvements.

Resources

