General installation instructions for clustered deployment
About this task
To install the Sterling Configurator Visual Modeler on a clustered deployment, perform the following general tasks:
Procedure
- Depending on the cluster architecture, install the Sterling Configurator Visual Modeler on each instance, or into the Administration server that deploys the Web application to the managed servers.
- If you are using 2-way encryption anywhere in the implementation, follow these steps:
  - Make sure that you start one of the machines before the others.
  - Perform a persist operation that requires the use of 2-way encryption.
  - Identify the location of the dcmsKey.ser file on this machine and copy this file to the corresponding location on the other machines of the cluster.
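As an illustration only, the copy can be scripted; the node names and the dcmsKey.ser path below are placeholders, and the script prints the scp commands for review rather than running them:

```shell
#!/bin/sh
# Placeholder node names and key location -- substitute the hosts in your
# cluster and the dcmsKey.ser path that you identified in the step above.
NODES="node2 node3"
KEY_PATH="/opt/VisualModeler/WEB-INF/properties/dcmsKey.ser"

# Dry run: print each copy command so it can be reviewed before running.
for node in $NODES; do
    echo scp "$KEY_PATH" "$node:$KEY_PATH"
done
```

Removing the echo performs the copies; the target path must match the file's location on each machine.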
- Perform the following steps to share directories:
  - Select one of the machines as the “primary machine”. Allocate a directory on this machine to provide the shared location.
  - Share this location so that all members of the cluster have access to it:
    - Windows: share this directory to the other machines.
    - UNIX: use NFS to share the directory.
  - On all machines, mount the file system so that all cluster members have the same mount point to this directory. For example:

      /DEBS_shared

  - Under /DEBS_shared/, create a subdirectory for each of the categories shown in the configuration file (loadable, writable, and so on), for example:

      /DEBS_shared/lw

    and set that value in the configuration file, for example:

      <loadable ...>/DEBS_shared/lw</loadable>
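The directory layout from the steps above can be sketched as a short script. The lw category name follows the example; the NFS export and mount commands are environment-specific and are shown only as comments. To keep the sketch runnable without root access, it defaults to a DEBS_shared directory under the current directory rather than /DEBS_shared:

```shell
#!/bin/sh
# Shared root; defaults to ./DEBS_shared so the sketch runs without root.
# In production this would be the shared mount point, such as /DEBS_shared.
SHARED_ROOT="${SHARED_ROOT:-$PWD/DEBS_shared}"

# One subdirectory per category from the configuration file (loadable,
# writable, and so on); "lw" matches the example above.
mkdir -p "$SHARED_ROOT/lw"

# On UNIX, the primary machine would then export the directory over NFS
# (environment-specific), for example:
#   echo "/DEBS_shared *(rw,sync)" >> /etc/exports && exportfs -ra
# and every cluster member would mount it at the same mount point:
#   mount primary:/DEBS_shared /DEBS_shared

ls "$SHARED_ROOT"
```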
- As a site administrator, set the value of the useSessionCaching system property to “true”. This property is in the Profile Manager section of the system properties.
- Enable your Sterling Configurator Visual Modeler implementation as a distributed implementation as follows:
  - As a site administrator, set the value of the GlobalCache: Implementation Class system property to com.comergent.globalcache.DistributedCache. This property is in the GlobalCache section of the system properties. This setting tells the Sterling Configurator Visual Modeler to use the Ehcache configuration file WEB-INF/properties/DistributedCache-Config.xml.
  - Enable the DistributedEventService by uncommenting the RefreshServiceHelper listener code in the WEB-INF/web.xml configuration file:

      <!-- Start of Listeners -->
      <listener>
          <listener-class>
              com.comergent.reference.appservices.cache.CacheManagersHelper
          </listener-class>
      </listener>
      <!-- comment this out to allow preferences refresh event to propagate
           to other nodes -->
      <!--
      <listener>
          <listener-class>
              com.comergent.reference.appservices.cache.RefreshServiceHelper
          </listener-class>
      </listener>
      -->
      <listener>
          <listener-class>
              com.comergent.dcm.core.SessionMonitor
          </listener-class>
      </listener>
      <!-- End of Listeners -->
- As a site administrator, set the value of the cronRefreshTime property. The cronRefreshTime property specifies the polling interval, in seconds, at which a node should check for modified or added cron jobs. Set the value of this property in the Job Scheduler refresh time in seconds field of the Job Scheduler section of the system properties. The default value, -1, prevents the node from periodically checking for changes to cron jobs.
- By default, distributed nodes are discovered automatically by using the Ehcache configuration for both the GlobalCache and the EventService. However, you can also modify the cacheManagerPeerProviderFactory property settings for multicastGroupAddress and multicastGroupPort in the WEB-INF/properties/DistributedGlobalCache-Config.xml and WEB-INF/classes/DistributedEventService-config.xml files to specify unique IP addresses and ports for a cluster, adjusting the scope of the discovery mechanism:

    <cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=automatic,
                    multicastGroupAddress=230.0.0.1,
                    multicastGroupPort=4567,
                    timeToLive=1" />
You can also modify the timeToLive property setting to restrict how far multicast packets propagate. The setting values are:
- 0 - the same host
- 1 - the same subnet
- 32 - the same site
- 64 - the same region
- 128 - the same continent
- 255 - unrestricted
The default timeToLive value is 1, the same subnet.
The GlobalCache and EventService configuration must be the same on each cluster node, and must be unique for each cluster. For example, if you have two separate clusters, each cluster's configuration must be consistent across that cluster's nodes. The clusters themselves must each have unique configurations so that they do not conflict.
- Copy the prefs.xml configuration file to a shared location that is visible to all member machines of the cluster. The location of the file must be specified in the startup script for each cluster member as follows:

    -Dcomergent.preferences.store=<Path to prefs.xml>
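For illustration, a startup script fragment might append the option to the JVM options variable. The variable name JAVA_OPTS and the /DEBS_shared path are assumptions that depend on your servlet container and your shared location:

```shell
#!/bin/sh
# Assumed shared location for prefs.xml; adjust for your cluster.
PREFS_STORE="/DEBS_shared/prefs.xml"

# Append the preferences-store option to the JVM options that the startup
# script already passes to the server. The variable name is container
# specific (WebLogic scripts use JAVA_OPTIONS, for example).
JAVA_OPTS="$JAVA_OPTS -Dcomergent.preferences.store=$PREFS_STORE"

echo "$JAVA_OPTS"
```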
- Configure the cluster to check for new and updated files as soon as possible. This ensures that all servers are in sync and serve the same information to customers who access your site. This is especially important for ensuring that the latest generated product index file is available at all times.
  Place your configuration property XML files in a shared location that is accessible by all member machines of the cluster. Then, activate the AutoReload element of the SearchConfigurationProperties.xml configuration file as follows:

    <AutoReload activated="true" reloadFilePeriod="30"/>

  This activates the AutoReload function and instructs the cluster to check for updates every 30 seconds.
- Follow any remaining steps required by your servlet container or load balancer to implement their specific solution.
See the topics pertaining to setting up a WebLogic cluster and WebSphere® cluster.
Contact your IBM® representative for information about setting up other clustering architectures.