Load balancing for Dashboard Application Services Hub

You can set up a load balanced cluster of console nodes with identical configurations to evenly distribute user sessions.

Load balancing is ideal for Dashboard Application Services Hub installations with a large user population. When a node within a cluster fails, new user sessions are directed to other active nodes.

You can create a load balanced cluster from an existing stand-alone Jazz™ for Service Management application server instance. Its custom data is added to the central repository and subsequently replicated to new nodes as they are added to the cluster. If you want to add a node to a cluster and the node contains custom data, you must export the data before you join the node to the cluster. The exported data is later imported to one of the nodes in the cluster so that it is replicated across the other nodes in the cluster.

Workload is distributed by session, not by request. If a node in the cluster fails, users who are in session with that node must log back in to access the Dashboard Application Services Hub. Any unsaved work is not recovered.
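The following Java sketch is purely illustrative of this session-based (sticky) distribution; the class, method, and node names are hypothetical and do not represent the product's load balancer, which is configured separately.

import java.util.List;

/**
 * Illustrative only: demonstrates session-based (sticky) distribution.
 * Every request that carries the same session ID is routed to the same
 * node while that node is active; if the node fails, the session is not
 * transferred and the user must log in again on another node.
 * All names here are hypothetical.
 */
public class SessionAffinitySketch {

    // Choose a node for a session by hashing the session ID over the active nodes.
    static String nodeForSession(String sessionId, List<String> activeNodes) {
        int index = Math.floorMod(sessionId.hashCode(), activeNodes.size());
        return activeNodes.get(index);
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("node1", "node2", "node3");
        // Every request in this session is served by the same node.
        System.out.println(nodeForSession("user-session-42", nodes));

        // After a node failure, a new node is selected only when the user
        // logs in again; unsaved work on the failed node is not recovered.
        List<String> remaining = List.of("node1", "node3");
        System.out.println(nodeForSession("user-session-42", remaining));
    }
}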

Restriction: Before installing a fix pack in a Jazz for Service Management Version 1.1.1 or earlier load balanced environment, you must remove all nodes from the load balanced cluster using the procedure relevant to the existing environment. After removing all nodes from the cluster, you must install the fix pack on each node so that they are at the same release level of Dashboard Application Services Hub. You can recreate the load balanced cluster after updating each node. This restriction does not apply to Jazz for Service Management Version 1.1.2 or later.

Synchronized data

After load balancing is set up, changes in the console are stored in global repositories. These changes are synchronized to all of the nodes in the cluster using a common database. The following actions cause changes to the global repositories used by the console. Most of these changes are caused by actions in the Settings folder in the console navigation.
  • Creating, restoring, editing, or deleting a dashboard.
  • Creating, restoring, editing, or deleting a view.
  • Creating, editing, or deleting a preference profile or deploying preference profiles from the command line.
  • Copying a widget entity or deleting a widget copy.
  • Changing access to a widget entity, dashboard, external URL, or view.
  • Creating, editing, or deleting a role.
  • Changing widget preferences or defaults.
  • Making changes from the Users and Groups applications, including assigning users and groups to roles.
Note: Global repositories should never be updated manually.

During normal operation within a cluster, updates that require synchronization are first committed to the database. At the same time, the node that submits the update for the global repositories notifies all other nodes in the cluster about the change. As the nodes are notified, they retrieve the updates from the database and commit the change to their local configuration.
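The following Java sketch illustrates this commit-then-notify pattern under simplified assumptions; the interfaces and method names are hypothetical and are not part of the Dashboard Application Services Hub code base.

import java.util.List;

/**
 * Illustrative sketch of the synchronization pattern: commit the change
 * to the shared database first, then notify the peer nodes, which pull
 * the update and apply it to their local configuration.
 * All interfaces and method names are hypothetical.
 */
public class ClusterSyncSketch {

    interface GlobalRepositoryDb {
        void commit(String change);    // write the change to the shared database
        String latestChange();         // read the newest committed change
    }

    interface PeerNode {
        void notifyChange();           // tell a peer node that new data is available
    }

    // On the originating node: commit first, then notify the rest of the cluster.
    static void publishChange(String change, GlobalRepositoryDb db, List<PeerNode> peers) {
        db.commit(change);
        peers.forEach(PeerNode::notifyChange);
    }

    // On each notified node: pull the update from the database and apply it locally.
    static void onNotified(GlobalRepositoryDb db) {
        String change = db.latestChange();
        System.out.println("Committed to local configuration: " + change);
    }

    public static void main(String[] args) {
        // Minimal in-memory stand-ins for the shared database and one peer node.
        final String[] store = new String[1];
        GlobalRepositoryDb db = new GlobalRepositoryDb() {
            public void commit(String change) { store[0] = change; }
            public String latestChange() { return store[0]; }
        };
        PeerNode peer = () -> onNotified(db);

        publishChange("dashboard updated", db, List.of(peer));
    }
}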

If data fails to be committed on a node, a warning message is written to the log file and the node is prevented from making its own updates to the database. Restarting the Jazz for Service Management application server instance on the node rectifies most synchronization issues; if it does not, the node must be removed from the cluster for corrective action. For more information, see Maintaining a load balancing cluster.

Note: If the database server restarts, all connections between the cluster and the database are lost. It can take up to 5 minutes for the connections to be restored before users can perform updates again, for example, modifying dashboards.

Manual synchronization and maintenance mode

Updates to deploy, redeploy, or remove console modules are not automatically synchronized within the cluster. These changes must be performed manually at each node. For deployment and redeployment operations, the console module package must be identical at each node.

When one of the deployment commands is started on the first node, the system enters maintenance mode and changes to the global repositories are locked. After you finish the deployment changes on each of the nodes, the system returns to an unlocked state. There is no restriction on the order in which modules are deployed, removed, or redeployed on each of the nodes.

After the first node is deployed, notifications are sent to the other nodes in the cluster.
  • If a node's module package and console version do not match those of the upgraded node, the node goes into maintenance mode until it is upgraded. During maintenance mode, no write operations to the database can be performed from that node. When the node is upgraded, it returns to an unlocked state.
  • If a node's module package and console version match those of the upgraded node, the node continues to work normally.

While in maintenance mode, any attempts to make changes in the console that affect the global repositories are prevented and an error message is returned. The only changes to global repositories that are allowed are changes to a user's personal widget preferences. Any changes outside the control of the console, for example, a form submission in a widget to a remote application, are processed normally.
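The following Java sketch illustrates the maintenance-mode behavior described above under simplified assumptions; the class and method names are hypothetical and do not represent the product's implementation.

/**
 * Illustrative sketch of the maintenance-mode check: when a node is
 * notified that another node has deployed a console module, it compares
 * its own module package and console version with the upgraded node's
 * and locks global-repository writes if they differ.
 * All names are hypothetical.
 */
public class MaintenanceModeSketch {

    private boolean maintenanceMode = false;

    void onDeploymentNotification(String localPackage, String localVersion,
                                  String upgradedPackage, String upgradedVersion) {
        // A node whose module package or console version differs goes into maintenance mode.
        maintenanceMode = !(localPackage.equals(upgradedPackage)
                && localVersion.equals(upgradedVersion));
    }

    void writeToGlobalRepository(String change, boolean isPersonalWidgetPreference) {
        // Personal widget preferences are the only writes allowed during maintenance mode.
        if (maintenanceMode && !isPersonalWidgetPreference) {
            throw new IllegalStateException(
                "Cluster is in maintenance mode; upgrade this node before making changes.");
        }
        // ... commit the change to the shared database ...
    }

    // Called after the module is deployed on this node, returning it to an unlocked state.
    void onLocalUpgradeComplete() {
        maintenanceMode = false;
    }
}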

The following operations are also not synchronized within the cluster and must be carried out manually for each node. These updates do not place the cluster in maintenance mode.
  • Deploying, redeploying, and removing wires and transformations
  • Customizing the console user interface (for example, with custom images or style sheets) by using consoleProperties.xml.
To reduce the chance that users establish sessions with nodes that have different wire and transformation definitions, or user interface customizations, schedule changes to coincide with console module deployments.

Requirements

The following requirements must be met before load balancing can be enabled: