Load balancing for Dashboard Application Services Hub
You can set up a load balanced cluster of console nodes with identical configurations to evenly distribute user sessions.
Load balancing is ideal for Dashboard Application Services Hub installations with a large user population. When a node within a cluster fails, new user sessions are directed to other active nodes.
You can create a load balanced cluster from an existing stand-alone Jazz™ for Service Management application server instance. Its custom data is added to the central repository and subsequently replicated to new nodes as they are added to the cluster. If you want to add a node to a cluster and the node contains custom data, you must export the data before you join the node to the cluster. The exported data is later imported to one of the nodes in the cluster so that it is replicated across the other nodes in the cluster.
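The order of these steps matters: export before the node joins the cluster, import afterward on a node that is already a member. The following Python sketch illustrates only that sequencing; the export, join, and import helpers are hypothetical stubs, not the product's actual commands.

```python
# Sequencing sketch for adding a node that has custom data.
# The three helpers below are hypothetical stand-ins for the
# product's actual export, join, and import operations.

def export_console_data(node: str) -> str:
    print(f"[{node}] exporting custom console data")
    return f"{node}-export.zip"   # placeholder archive name

def join_cluster(node: str) -> None:
    print(f"[{node}] joining cluster; local data is replaced by "
          f"the central repository")

def import_console_data(cluster_node: str, archive: str) -> None:
    print(f"[{cluster_node}] importing {archive}; the change is "
          f"replicated to all other nodes in the cluster")

archive = export_console_data("new-node")   # 1. export first
join_cluster("new-node")                    # 2. then join the cluster
import_console_data("node-1", archive)      # 3. import on a cluster member
```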
Workload is distributed by session, not by request. If a node in the cluster fails, users who are in session with that node must log back in to access the Dashboard Application Services Hub. Any unsaved work is not recovered.
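Session-based distribution means that every request in a session is pinned to one node, so a node failure loses only that node's in-flight sessions. The following is a minimal sketch of that affinity rule with hypothetical node addresses; a real front-end dispatcher such as IBM HTTP Server manages this itself.

```python
import hashlib

# Hypothetical cluster members; a real dispatcher tracks these itself.
NODES = ["dash-node1:16311", "dash-node2:16311", "dash-node3:16311"]

def route(session_id: str, live_nodes: list[str]) -> str:
    """Pin every request of a session to the same live node.

    Sessions whose node has failed hash onto one of the remaining
    live nodes, but in-flight state on the failed node is not
    recovered, so those users must log in again.
    """
    bucket = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
    return live_nodes[bucket % len(live_nodes)]

print(route("JSESSIONID=abc123", NODES))      # all nodes up
print(route("JSESSIONID=abc123", NODES[1:]))  # first node failed
```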
Synchronized data
The following changes are automatically synchronized across all nodes in the cluster:
- Creating, restoring, editing, or deleting a dashboard.
- Creating, restoring, editing, or deleting a view.
- Creating, editing, or deleting a preference profile or deploying preference profiles from the command line.
- Copying a widget entity or deleting a widget copy.
- Changing access to a widget entity, dashboard, external URL, or view.
- Creating, editing, or deleting a role.
- Changes to widget preferences or defaults.
- Changes from the Users and Groups applications, including assigning users and groups to roles.
During normal operation within a cluster, updates that require synchronization are first committed to the database. At the same time, the node that submits the update to the global repositories notifies all other nodes in the cluster about the change. As the nodes are notified, they retrieve the update from the database and commit the change to their local configuration.
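The pattern is commit-first, notify-second, pull-on-notify. The following is a minimal sketch of that flow, using sqlite3 as a stand-in for the DB2 repository; the class, table, and key names are illustrative, not the product's actual schema.

```python
import sqlite3
import threading

# In-memory database standing in for the shared DB2 repository.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE repo (key TEXT PRIMARY KEY, value TEXT)")
lock = threading.Lock()  # guards concurrent submitters

class Node:
    def __init__(self, name):
        self.name, self.peers, self.local = name, [], {}

    def submit_update(self, key, value):
        # 1. Commit the change to the central database first ...
        with lock:
            db.execute("REPLACE INTO repo VALUES (?, ?)", (key, value))
            db.commit()
        # 2. ... then notify every other node in the cluster.
        for peer in self.peers:
            peer.on_notify(key)
        self.local[key] = value

    def on_notify(self, key):
        # Notified nodes pull the update from the database and
        # commit it to their local configuration.
        row = db.execute("SELECT value FROM repo WHERE key = ?",
                         (key,)).fetchone()
        self.local[key] = row[0]

a, b = Node("node-a"), Node("node-b")
a.peers, b.peers = [b], [a]
a.submit_update("dashboard/ops", "v2")
print(b.local)  # {'dashboard/ops': 'v2'}
```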
If data fails to be committed on a node, a warning message is written to the log file and the node is prevented from making its own updates to the database. Restarting the Jazz for Service Management application server instance on the node rectifies most synchronization issues; if it does not, the node must be removed from the cluster for corrective action. For more information, see Maintaining a load balancing cluster.
Manual synchronization and maintenance mode
Updates to deploy, redeploy, or remove console modules are not automatically synchronized within the cluster. These changes must be performed manually at each node. For deployment and redeployment operations, the console module package must be identical at each node.
When one of the deployment commands is started on the first node, the system enters maintenance mode and changes to the global repositories are locked. After you finish the deployment changes on each of the nodes, the system returns to an unlocked state. There is no restriction on the order in which modules are deployed, removed, or redeployed on each of the nodes.
- If a node's module package and console version do not match those of the upgraded node, the node goes into maintenance mode until it is upgraded. During maintenance mode, no write operations to the database can be performed from that node. When the node is upgraded, it returns to an unlocked state.
- If a node's module package and console version match those of the upgraded node, the node continues to work normally.
While in maintenance mode, any attempts to make changes in the console that affect the global repositories are prevented and an error message is returned. The only changes to global repositories that are allowed are changes to a user's personal widget preferences. Any changes outside the control of the console, for example, a form submission in a widget to a remote application, are processed normally.
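The behavior amounts to a conditional write lock: global repository writes are rejected during maintenance mode, with a user's personal widget preferences as the one exception. The following sketch illustrates that policy only; it is not the console's implementation.

```python
class GlobalRepositories:
    """Toy model of the maintenance-mode write policy."""

    def __init__(self):
        self.maintenance_mode = False
        self.data = {}

    def write(self, key, value, personal_widget_pref=False):
        # Personal widget preferences are the only writes allowed
        # while the cluster is in maintenance mode.
        if self.maintenance_mode and not personal_widget_pref:
            raise RuntimeError(
                "Maintenance mode: global repository updates are "
                "locked until deployment finishes on all nodes")
        self.data[key] = value

repo = GlobalRepositories()
repo.maintenance_mode = True  # a deployment command started on a node
repo.write("user1/widget-pref", "dark", personal_widget_pref=True)  # allowed
try:
    repo.write("roles/operator", "new-definition")  # blocked
except RuntimeError as err:
    print(err)
```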
The following changes are also not synchronized and must be made manually at each node:
- Deploying, redeploying, and removing wires and transformations.
- Customization changes to the console user interface (for example, custom images or style sheets) by using consoleProperties.xml.
Requirements
- If a node that you want to add to the cluster contains custom data, export the data before you join the node to the cluster. The exported data is later imported on one of the nodes in the cluster so that it is replicated across the other nodes.
- Lightweight Directory Access Protocol (LDAP) must be installed and configured as the user repository for each node in the cluster. For information about which LDAP servers you can use, see List of supported software for WebSphere® Application Server V8.5. See Configuring LDAP user registries for instructions on how to enable LDAP for each node.
- A front-end Network Dispatcher (for example, IBM HTTP Server) must be set up to handle and distribute all incoming session requests. For more information about this task, see Setting up intermediary services.
- DB2® Version 10.1 must be installed within the network to synchronize the global repositories for the console cluster.
- From Jazz for Service Management Version 1.1.3 Fix Pack 9 onwards, Dashboard Application Services Hub is supported on Netcool/OMNIbus ObjectServer Version 8.1. For more information, see the following:
- IBM Jazz for Service Management (JazzSM) v1.1.3.9/Dashboard Application Services Hub (DASH) v3.1.3.9: Steps to configure JazzSM/DASH High Availability on Netcool/OMNIbus ObjectServer v8.1.
- IBM Jazz for Service Management (JazzSM) v1.1.3.9/Dashboard Application Services Hub (DASH) v3.1.3.9: Steps to Migrate DB2 High Availability from existing DB2 connection to Netcool/OMNIbus ObjectServer v8.1.
- Each node in the cluster must be enabled to use the same LDAP server with the same user and group configuration.
- All console nodes in a load balancing cluster must have synchronized clocks. A basic skew check is sketched after this list.
- The WebSphere Application Server and Jazz for Service Management application server installations must be at the same release level on every node, including any fix packs. Fixes and upgrades for the run time must be applied manually on each node.
- Before joining nodes to a cluster, make sure that each node uses the same file-based repository user ID, which is assigned the role of iscadmins.
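As a convenience for the clock requirement noted earlier, the following sketch compares each node's clock against the local one over SSH. The host names and the tolerance value are assumptions, and it presumes password-less SSH access and a `date` command on each node; it is an operational convenience, not part of the product.

```python
import subprocess
import time

# Placeholder host names for the cluster nodes.
NODES = ["dash-node1", "dash-node2", "dash-node3"]
MAX_SKEW_SECONDS = 5  # assumed tolerance; choose what your deployment needs

for node in NODES:
    # Ask each node for its epoch time and compare with the local clock.
    remote = int(subprocess.check_output(["ssh", node, "date", "+%s"]))
    skew = abs(remote - int(time.time()))
    status = "OK" if skew <= MAX_SKEW_SECONDS else "OUT OF SYNC"
    print(f"{node}: skew {skew}s [{status}]")
```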