Horizontal clustering

You can enable multiple appsvr, eventprocessor, queuemanager, or scheduler services on multiple workstations to increase the capacity of your installation.

The following figure shows a horizontally clustered environment where multiple services exist on multiple application servers:

Figure: Horizontal clustering
Restriction: The following restrictions apply to horizontal clustering:
  • Each workstation must run the rmiregistry service
  • Each workstation must run at least the admin service
  • Each workstation in the cluster requires a separate directory for configuration files (the contents of <install dir>/bin/conf) and a separate logging directory.
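
For example, each workstation might keep its own copies under the Product Master user's home directory (the same locations are used in the worked example at the end of this topic):

  mkdir /home/mdmpim/config   # this workstation's copy of the configuration files
  mkdir /home/mdmpim/logs     # this workstation's logging directory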

To tune a complex installation, you implement multiple services and spread them across multiple systems. Tuning a complex installation is the same as tuning single application servers, but tuning a complex installation might also involve using a hardware load balancer that routes user HTTP requests to a pool of application servers.
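
Any HTTP load balancer that can distribute requests across the pool of application servers can be used. Purely as an illustration, a software balancer such as nginx could perform the same routing; the host names and port in the following excerpt are hypothetical placeholders, not product defaults:

  upstream appsvr_pool {
      server appsvrhost1.example.com:9080;   # hypothetical appsvr host and port
      server appsvrhost2.example.com:9080;   # hypothetical appsvr host and port
  }
  server {
      listen 80;
      location / {
          proxy_pass http://appsvr_pool;
      }
  }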

To tune an application server “pool”, you:
  • Plan the location and number of services
  • Tune individual servers

Planning the location and number of application servers for scaling

In a system deployment that involves more than one application server, each application server must run one admin and one rmiregistry service. The appsvr, eventprocessor, queuemanager, and scheduler services can be instantiated multiple times, on a single physical system or across multiple physical systems, and each must be instantiated at least once. However, the services that do the bulk of the work are the appsvr and scheduler services. You typically need only one eventprocessor service and one queuemanager service.

Given these restrictions, best practices are:
  • Run the eventprocessor and queuemanager services on any workstation with any other service. These services are not “heavy” services.
  • If your deployment runs both the scheduler and appsvr services, use one or more dedicated systems for the scheduler service. The workstations that you dedicate to the scheduler service must also run the admin and rmiregistry services. If sufficient memory and CPU capacity is available, multiple scheduler services can run on the same workstation.
  • If possible, do not run the appsvr service on a workstation that also runs the scheduler.
  • To improve response time for users, use multiple appsvr services. These appsvr services can run on a single workstation, on multiple workstations, or a combination of both.
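
For example, a minimal two-workstation layout that follows these practices might define the [services] section of each workstation's env_settings.ini as follows (a sketch only; every workstation also runs the rmiregistry service, and your own service mix depends on your workload):

  On the workstation that serves user requests:
    admin=admin
    eventprocessor=eventprocessor
    queuemanager=queuemanager
    appsvr=appsvr

  On the workstation that is dedicated to background processing:
    admin=admin
    scheduler=scheduler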

Tuning individual application servers

Tuning the application servers in the pool is similar to tuning a stand-alone application server. Although fewer services might be running on a system, the practical maximum JVM size of 1.5 GB applies on 32-bit systems. If you have fewer services per system, you can use smaller individual systems where applicable.

Exception: In an environment with multiple application servers, the binary files and document store must be on a shared file system, most likely NFS. The connection between each application server and the NFS server must be examined for performance. Because Product Master does not create a high demand on the disk, it is possible to use one of the application servers as the NFS server. You must ensure that the NFS server is robust because the entire installation fails if the NFS server fails.
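
As an illustration only, if one application server also acts as the NFS server for the shared install directory, the export and client mounts might resemble the following standard Linux NFS configuration (adjust host names, paths, and mount options for your environment):

  # On the NFS server (node1 in the example later in this topic), in /etc/exports:
  /usr/local/mdmpim  node2.mycompany.com(rw,sync) node3.mycompany.com(rw,sync)

  # On the NFS server, activate the export:
  exportfs -ra

  # On each NFS client (node2 and node3), mount the shared directory:
  mount -t nfs node1.mycompany.com:/usr/local/mdmpim /usr/local/mdmpim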

Configuring member workstations

You must configure each member workstation in the cluster, and every workstation must run at least the admin service.

Procedure
  1. Create the init script.
    1. In the Product Master user's .bashrc file on each workstation, add the environment variable CCD_CONFIG_DIR and set it to the configuration directory.
    2. Log out and log in or source the init script.
  2. Set your runtime parameters.
    1. Create an env_settings.ini file in the configuration directory.
    2. Set the log_dir parameter in the [env] section of the env_settings.ini file to the logging directory.
      Note: If you want to see the same log files for all of the services together, ensure that the logging directory is shared across all of the workstations in the cluster.
    3. Define the services to be run on each system.
    4. Run setup.sh for each system.
    5. Run configureEnv.sh for each system.
  3. Update the admin_properties.xml file.
    • On one system, edit the <install dir>/etc/default/admin_properties.xml file and add the host name of each node.

The following example depicts a horizontal cluster that uses this configuration:

  • IBM® Product Master is installed in the /usr/local/mdmpim directory. This directory is shared between all nodes and is available at /usr/local/mdmpim on all nodes. The Product Master user has read, write, and run permissions to the directory and all the files and sub-directories.
  • The Product Master user name is mdmpim
  • The Product Master user's directory is /home/mdmpim
  • The cluster consists of three systems:
    • node1.mycompany.com
    • node2.mycompany.com
    • node3.mycompany.com
  • The logging directory is /home/mdmpim/logs
  • The configuration directory is /home/mdmpim/config
  • Node 1 runs the appsvr, eventprocessor, and queuemanager services. Node 2 runs the workflowengine and a scheduler service. Node 3 runs only a scheduler service.
Procedure
  1. Create the logging directory. On all three nodes, run the mkdir /home/mdmpim/logs command.
  2. Create the configuration directory. On all three nodes, run the mkdir /home/mdmpim/config command.
    1. On node1, run the cp -r /usr/local/mdmpim/bin/conf/* /home/mdmpim/config command.
    2. On node1, run the rm -fr /usr/local/mdmpim/bin/conf/* command.
  3. Configure the Product Master user's environment on all three nodes:
    1. Edit the $HOME/.bashrc file.
    2. Set and export PERL5LIB and LANG.
    3. Set and export CCD_CONFIG_DIR=/home/mdmpim/config.
    4. Log out and log in.
  4. Configure runtime parameters on all three nodes:
    1. Create and edit the env_settings.ini file.
    2. Uncomment and set log_dir=/home/mdmpim/logs.
    3. Configure services.
      1. On node1, edit the [services] section to read:
        admin=admin
        eventprocessor=eventprocessor
        queuemanager=queuemanager
        appsvr=appsvr
      2. On node2:
        admin=admin
        scheduler=scheduler
        workflowengine=workflowengine
      3. On node3:
        admin=admin
        scheduler=scheduler
    4. Set the appserver and db parameters.
  5. Start the services. On all three nodes, change to the <install dir>/bin/go directory and run the start_local.sh script.
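
For reference, after you complete these steps, node3 might contain entries like the following (a sketch only; the PERL5LIB and LANG values and the required appserver and db parameters of env_settings.ini are omitted):

  # Lines added to /home/mdmpim/.bashrc on node3:
  export CCD_CONFIG_DIR=/home/mdmpim/config

  # Excerpt from /home/mdmpim/config/env_settings.ini on node3:
  [env]
  log_dir=/home/mdmpim/logs

  [services]
  admin=admin
  scheduler=scheduler

  # Start the services on node3:
  cd /usr/local/mdmpim/bin/go
  ./start_local.sh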