You can set up a load balancing cluster of portal nodes
with identical configurations to evenly distribute user sessions.
Load balancing is ideal for Tivoli Integrated Portal installations
with a large user population. When a node within a cluster fails,
new user sessions are directed to other active nodes.
Workload is distributed by session, not by request. If a node in the cluster
fails, users who are in session with that node must log back in to
access the Tivoli Integrated Portal. Any unsaved
work is not recovered.
Synchronized data
After
load balancing is set up, changes in the console that are stored in
global repositories are synchronized to all of the nodes in the cluster
using a common database. The following actions cause changes to the
global repositories used by the console. Most of these changes are
caused by actions in the
Settings folder in
the console navigation.
- Creating, restoring, editing, or deleting a page.
- Creating, restoring, editing, or deleting a view.
- Creating, editing, or deleting a preference profile or deploying
preference profiles from the command line.
- Copying a portlet entity or deleting a portlet copy.
- Changing access to a portlet entity, page, external URL, or view.
- Creating, editing, or deleting a role.
- Changing portlet preferences or defaults.
- Making changes in the Users and Groups applications,
including assigning users and groups to roles.
Note: Global repositories should never be updated manually.
During
normal operation within a cluster, updates that require synchronization
are first committed to the database. At the same time, the node that
submits the update for the global repositories notifies all other
nodes in the cluster about the change. As the nodes are notified,
they retrieve the updates from the database and commit the changes to
their local configuration.
If data fails to be committed on any given
node, a warning message is written to the log file, and the node is
prevented from making its own updates to the database. Restarting the Tivoli Integrated Portal Server instance on the node
rectifies most synchronization issues; if it does not, remove the node
from the cluster for corrective action. See Monitoring a load balancing cluster for more information.
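A restart of this kind is typically performed with the standard WebSphere Application Server stopServer and startServer scripts, run from the bin directory of the Tivoli Integrated Portal profile. The following commands are a sketch only; the installation path, profile name (TIPProfile), server name (server1), and administrator credentials are assumptions and might differ in your environment.
  cd /opt/IBM/tivoli/tip/profiles/TIPProfile/bin
  ./stopServer.sh server1 -username tipadmin -password myPassword
  ./startServer.sh server1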
Note: If
the database server restarts, all connections between it and the cluster
are lost. It can take up to five minutes for connections to be restored
so that users can again perform update operations, for example, modifying
or creating views or pages.
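If users still cannot perform update operations several minutes after the database server restarts, you can confirm from one of the nodes that the synchronization database is reachable again. The following DB2 command line processor check is a sketch only; the database alias (TIPDB) and user ID (db2inst1) are placeholder values rather than names defined by the product.
  db2 connect to TIPDB user db2inst1
  db2 terminate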
Manual synchronization and maintenance mode
Updates to deploy, redeploy, or remove console modules
are not automatically synchronized within the cluster. These changes
must be performed manually at each node. For deploy and redeploy operations,
the console module package must be identical at each node.
When
one of the deployment commands is started on the first node, the system
enters maintenance mode and changes to the global repositories
are locked. After you finish the deployment changes on each of the
nodes, the system returns to an unlocked state. There is no restriction
on the order in which modules are deployed, removed, or redeployed on
each of the nodes.
While in maintenance mode, any attempts to
make changes in the portal that affect the global repositories are
prevented and an error message is returned. The only changes to global
repositories that are allowed are changes to a user's personal portlet
preferences. Any changes outside the control of the portal, for example,
a form submission in a portlet to a remote application, are processed
normally.
The following operations are also not synchronized
within the cluster and must be performed manually at each node. These
updates do not place the cluster in maintenance mode.
- Deploying, redeploying, and removing wires and transformations
- Customization changes to the console user interface (for example,
custom images or style sheets) using consoleProperties.xml.
To reduce the chance that users could establish sessions with
nodes that have different wire and transformation definitions or user
interface customizations, schedule these changes to coincide with
console module deployments.
Requirements
The following requirements
must be met before load balancing can be enabled:
- A Lightweight Directory Access Protocol (LDAP) server must be installed
and configured as the user repository for each node in the cluster.
For information about which LDAP servers you can use, see List of supported software for WebSphere Application
Server V7.0. See Configuring LDAP user registries for instructions
on how to enable LDAP for each node.
- A front-end network dispatcher (for example, IBM HTTP Server) must be set up to handle
and distribute all incoming session requests. See Setting up intermediary services for more
information about this task.
- DB2 Version 9.7 must be installed within the network to synchronize
the global repositories for the console cluster.
- Each node in the cluster must be configured to use the same LDAP
server with the same user and group configuration.
- All console nodes in the load balancing cluster must be installed
with the same cell name. After console installation on each node, use
the -cellName parameter on the manageprofiles command, as shown in
the example after this list.
- All console nodes in the load balancing cluster must have synchronized
clocks.
- The WebSphere Application Server and Tivoli Integrated Portal Server versions must have
the same release level, including any fix packs. Fixes and upgrades
for the runtime must be applied manually at each node.
- Before joining nodes to a cluster, make sure that each node uses
the same file-based repository user ID, which has been assigned
the iscadmins role.
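The cell name is normally specified when the Tivoli Integrated Portal profile is created. The following manageprofiles invocation is a sketch only; the installation path, profile name, template path, node name, host name, and cell name are placeholder values, and the exact set of parameters that your installation requires might differ.
  /opt/IBM/tivoli/tip/bin/manageprofiles.sh -create -profileName TIPProfile \
    -templatePath /opt/IBM/tivoli/tip/profileTemplates/default \
    -nodeName tipNode01 -hostName tip1.example.com -cellName TIPCell01
Run the equivalent command with the same -cellName value on each node so that all cluster members share the same cell name.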