By default, the global cache is turned off, and the cache policy
is set to disabled. To use the global cache,
select an integration node cache policy by using the cachePolicy parameter. IBM Integration Bus has a default cache policy
that creates a default topology of cache components in a single
integration node. The default topology puts catalog servers
and container servers in integration servers dynamically so
that the cache is available for use by all integration servers
in the integration node. Integration node properties are available
to specify a range of ports and a listener host for the default
topology. The integration node chooses a range of ports automatically,
but you can specify a particular range by using the
cachePortRange parameter. You can use
the listenerHost parameter to specify
the listener host that is used by the cache components. If
your computer has more than one host name, setting the listener
host ensures that the cache components use the correct host
name.
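For example, you might select the default cache policy and then pin the port range and listener host with commands like the following. This is a sketch: IBNODE and the host name are placeholders, and you should verify the exact mqsichangeproperties syntax for your version of IBM Integration Bus.

```shell
# Select the default cache policy on the integration node
# (IBNODE is a placeholder integration node name).
mqsichangeproperties IBNODE -b cachemanager -o CacheManager \
  -n cachePolicy -v default

# Pin the range of ports used by the cache components.
mqsichangeproperties IBNODE -b cachemanager -o CacheManager \
  -n cachePortRange -v 2800-2819

# Fix the listener host when the machine has more than one host name
# (host1.example.com is a placeholder).
mqsichangeproperties IBNODE -b cachemanager -o CacheManager \
  -n listenerHost -v host1.example.com

# Restart the integration node so that the changes take effect.
mqsistop IBNODE
mqsistart IBNODE
```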
If you set the cache policy to none,
you must set the integration server properties explicitly.
The properties that were set most recently by the integration
node policy are used as a starting point. Therefore, if you
set the cache policy to default first, then
switch to none, the default topology properties
are retained.
You can configure the global cache to
span multiple integration nodes by setting the cache policy
to the fully qualified name of an XML policy file. This policy
file lists the integration nodes that share the cache and, for
each integration node, specifies the listener host, the port
range, and the number of catalog servers hosted. You can use
the policy file to set up a single integration node that hosts
two catalog servers. If one catalog server is stopped, the
integration node switches to the other catalog server, ensuring
that no cache data is lost. You can also use the policy file
to configure a multi-instance integration node to host more than
one container server. If the active instance of the multi-instance
integration node fails, the global cache switches to the
container server in the standby instance.
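As an illustration, a policy file for a single integration node that hosts two catalog servers might look broadly like the following. This is a sketch only: the element names and namespace shown are indicative rather than authoritative, and MYNODE, the host name, and the port numbers are placeholders, so check the sample policy files that are supplied with the product for the exact schema.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of a cache policy file: one integration node hosting two
     catalog servers, so that the cache survives the loss of one
     catalog server. Verify element names against the shipped samples. -->
<cachePolicy xmlns="http://www.ibm.com/xmlns/prod/websphere/messagebroker/globalcache/policy-1.0">
  <broker name="MYNODE" listenerHost="host1.example.com">
    <!-- Number of catalog servers that this integration node hosts -->
    <catalogs>2</catalogs>
    <!-- Range of ports reserved for this node's cache components -->
    <portRange>
      <startPort>2800</startPort>
      <endPort>2819</endPort>
    </portRange>
  </broker>
</cachePolicy>
```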
If you
set the cache policy to disabled, all cache components
in the integration node are disabled. The disabled policy
is the default setting.
For more information, see Configuring the embedded global cache and Parameter values for the cachemanager component.
The cache manager is the integration server resource that manages
the cache components that are embedded in that integration server;
each integration server contains one. In the default topology, one
integration server in the integration node hosts a catalog server,
and up to three other integration servers in that integration node
host container servers. All integration servers can communicate with
the global cache, regardless of whether they host catalog servers,
container servers, or neither. When you turn off the default topology,
configure the integration servers by setting the parameter values for
the cachemanager component.
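To see the cache manager properties that are currently in effect for an integration server, you can query them with mqsireportproperties. In this sketch, IBNODE and the integration server name default are placeholders, and ComIbmCacheManager is assumed to be the object name that the cache manager reports under, so verify it for your version.

```shell
# Report the cache manager properties of the integration server
# named "default" in integration node IBNODE (both placeholders).
mqsireportproperties IBNODE -e default -o ComIbmCacheManager -r
```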
A container server is a component, embedded in an integration server,
that holds a subset of the cache data. Between them, all container
servers in the global cache host all of the cache data at least once.
If more than one container server exists, the default cache policy
ensures that all data is replicated at least once. In this way, the
global cache can cope with the loss of container servers without
losing data. You can host more than one container server in a
multi-instance integration node. If the active instance of the
multi-instance integration node fails, the global cache switches to
the container server in the standby instance.
The catalog server controls the placement of data and monitors the
health of container servers. You must have at least one catalog server
in your global cache. To avoid losing cache data when a
catalog server is lost, use a policy file to specify more
than one catalog server for an integration node. For example,
if you specify two catalog servers for a single integration node
and one catalog server fails, the integration node switches to the
other catalog server. Similarly, if the cache is shared by two
integration nodes, each of which hosts a catalog server, and one
catalog server fails, both integration nodes switch to the remaining
catalog server. However, having more than one catalog server can
increase the time that elapses at startup before the cache is
available: if you have more than one catalog server, at least two of
them must be started before the cache is available. If you configure
a cache across multiple integration nodes with multiple catalog
servers, and you need to start one integration node before the
others, you can configure that integration node to host two catalog
servers. You cannot host catalog servers in a multi-instance
integration node.
When you are using multiple catalog
servers, you can improve performance by taking the following steps:
- Provide other integration servers that host container servers
only, rather than having only integration servers that host
both catalog and container servers.
- Start and stop integration servers in sequence, rather than using
the mqsistart or mqsistop commands to
start or stop all integration servers at once. For example,
start the integration servers that host catalog servers
before you start the integration servers that host only
container servers.
When you are using a global cache that spans multiple integration
nodes, ensure that all WebSphere eXtreme
Scale servers
that are clustered in one embedded grid use the same domain
name. Only servers with the same domain name can participate
in the same grid. WebSphere eXtreme
Scale clients
use the domain name to identify and distinguish between embedded
grids. If you do not specify a domain name in the integration
server or integration node policy file, the integration node
derives a name that is based on the server names of the catalog
servers, and each server starts with that derived domain name.
In previous versions of IBM Integration Bus, the domain name for all WebSphere eXtreme
Scale servers in all embedded caches
was an empty string. Because servers in different domains cannot
participate in the same grid, for a cache that spans more than
one integration node, migrate those integration nodes at the same
time.
Data is stored in maps. A map is a data structure that maps keys
to values. The global cache can have several maps; one of them is
the default map. The cache uses WebSphere eXtreme
Scale dynamic
maps. Any map name is allowed, apart from names that begin with SYSTEM.BROKER,
which are reserved for use by the integration node. The default map
is named SYSTEM.BROKER.DEFAULTMAP; you can use or clear this map.
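As a sketch of how a message flow works with these maps, the MbGlobalMap class in the JavaCompute node API provides client access to the cache. The map name MY.CACHE.MAP and the key and value below are placeholders, and the snippet assumes an IBM Integration Bus runtime, so it is illustrative rather than runnable standalone.

```java
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbGlobalMap;

public class CacheExample {
    void putAndGet() throws MbException {
        // Access the default map, SYSTEM.BROKER.DEFAULTMAP.
        MbGlobalMap defaultMap = MbGlobalMap.getGlobalMap();

        // Access (or create on first use) a named dynamic map.
        // MY.CACHE.MAP is a placeholder; any name is allowed unless
        // it begins with SYSTEM.BROKER.
        MbGlobalMap customerMap = MbGlobalMap.getGlobalMap("MY.CACHE.MAP");

        // Store and retrieve a value by key.
        customerMap.put("customer:1234", "Jane Smith");
        Object name = customerMap.get("customer:1234");
    }
}
```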