Planning a configuration with more than one broker server

The number of broker servers in an instance influences its availability and message throughput:
  • If more than one broker server runs concurrently and one of them fails, another broker server can assume its workload, which improves overall system availability.
  • Increasing the number of broker servers can increase throughput. To maximize the throughput increase, run each broker server in a separate LPAR.
Note: The Timer service must run only in the primary server of an instance.
A configuration with more than one broker server requires:
An IBM® MQ cluster or queue sharing group
Each broker server requires its own queue manager, and these queue managers must all be members of the same cluster (see Figure 1) or of the same queue sharing group (see Figure 2). For more information, see Preparing a queue manager for each broker server. A minimal MQSC sketch of cluster membership follows this list.
A DB2® data sharing group
Each broker server must be connected to a DB2 subsystem that is located within the same z/OS® image, and these DB2 subsystems must all be members of the same DB2 data sharing group. For more information about DB2 data sharing, see DB2: Data Sharing: Planning and Administration.
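The following MQSC sketch outlines one way the queue managers of Figure 1 might be made members of the cluster represented by the placeholder DNIvCLUS. It is not part of the FTM SWIFT customization; the channel names, host names, and port numbers are placeholders, and the actual definitions are described in Preparing a queue manager for each broker server.

  * On MQG1: hold a full repository for the cluster (in practice you
  * would typically define a second full repository, for example on MQG2)
  ALTER QMGR REPOS(DNIvCLUS)
  DEFINE CHANNEL(DNIvCLUS.MQG1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
         CONNAME('mvsa.example.com(1414)') CLUSTER(DNIvCLUS)

  * On MQM1: join the same cluster (repeat on MQM2 and MQG2 with their
  * own channel names and connection names)
  DEFINE CHANNEL(DNIvCLUS.MQM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
         CONNAME('mvsa.example.com(1415)') CLUSTER(DNIvCLUS)
  DEFINE CHANNEL(DNIvCLUS.MQG1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
         CONNAME('mvsa.example.com(1414)') CLUSTER(DNIvCLUS)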
Figure 1. Example of a configuration with two broker servers that use an MQ cluster
Figure 2. Example of a configuration with two broker servers that use a queue sharing group

In Figure 1 and Figure 2:

Integration nodes
The integration nodes in which the FTM SWIFT message flows run have the names MQM1BRK and MQM2BRK.
Broker servers
The broker servers in which the integration nodes run have the same names as the integration nodes themselves (MQM1BRK and MQM2BRK). All broker servers must use the same level of FTM SWIFT code.
Broker-related queue managers
Queue managers MQM1 and MQM2 host the queues used by the integration nodes.
Gateway queue managers
Queue managers MQG1 and MQG2 serve as gateways to distribute the workload to both broker servers.
Queue manager cluster
In Figure 1, all four queue managers are members of the queue manager cluster specified by the placeholder DNIvCLUS. FTM SWIFT applications and the FTM SWIFT command-line interface are connected to either gateway queue manager, which directs each inbound message to either MQM1 or MQM2 (see the workload distribution sketch after this list).
Queue sharing group
In Figure 2, the queue managers MQM1 and MQM2 are members of the queue sharing group GRPA. FTM SWIFT applications and the FTM SWIFT command-line interface can be connected to MQM1, MQM2, or GRPA (see the shared-queue sketch after this list).
DB2 data sharing group
The broker servers have connections to their local DB2 subsystems, which are members of a DB2 data sharing group. The FTM SWIFT runtime data is located in table spaces shared by both members (DSN1 and DSN2).
HFS (or zFS)
The FTM SWIFT message flows and the code of the Java™ database routines are stored in a common, shared hierarchical file system (HFS) or z/OS file system (zFS). Using a shared HFS (or zFS) saves time, because the FTM SWIFT libraries need to be installed only once, rather than once per image. It also ensures that both servers use the same version of the delivered programs.
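Workload distribution sketch (for the cluster in Figure 1). The following hedged MQSC example shows the mechanism the gateways rely on: an instance of an input queue is defined as a cluster queue on both MQM1 and MQM2, so a message put through MQG1 or MQG2 can be delivered to either instance. The queue name DNIV.INPUT.QUEUE is a placeholder, not a name defined by FTM SWIFT.

  * On MQM1 and again on MQM2: advertise an instance of the input
  * queue to the cluster (queue name is a placeholder)
  DEFINE QLOCAL(DNIV.INPUT.QUEUE) CLUSTER(DNIvCLUS) DEFBIND(NOTFIXED)

  * The gateways MQG1 and MQG2 need no local instance; the cluster
  * workload algorithm chooses MQM1 or MQM2 for each message.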
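Shared-queue sketch (for the queue sharing group in Figure 2). As a hedged alternative to clustered queues, a queue can be defined as a shared queue in a coupling facility structure so that MQM1 and MQM2 can both serve it. The structure name APPL1 and the queue name are placeholders; the actual objects are created during FTM SWIFT customization.

  * On any member of queue sharing group GRPA
  * (structure and queue names are placeholders)
  DEFINE CFSTRUCT(APPL1) CFLEVEL(5) RECOVER(YES)
  DEFINE QLOCAL(DNIV.INPUT.QUEUE) QSGDISP(SHARED) CFSTRUCT(APPL1)

  * Applications connected to MQM1, MQM2, or the group GRPA can all
  * open this queue; the messages are kept in the coupling facility.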
For your configuration, determine:
  • The number of servers you will employ
  • The number of gateway queue managers you will need:
    • When you use a queue manager cluster, you typically have one gateway on each z/OS system on which you run an FTM SWIFT application or the CLI.
  • Whether you deploy all the service bundles for all OUs to every server of your instance, which increases availability, or only to selected servers.
  • Whether you connect an FTM SWIFT application or the CLI to a gateway queue manager or directly to the queue manager of an integration node:
    • If you connect to a gateway queue manager, the gateway automatically routes messages to servers for processing (see the sketch after this list).
    • If you connect directly to the queue manager of an integration node, you explicitly select which server is to receive each message.
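To illustrate the difference between the two connection choices, the following MQSC command, issued on a gateway queue manager such as MQG1, lists the cluster instances of the (placeholder) input queue. Because both MQM1 and MQM2 host an instance, the gateway can route each message automatically, whereas a direct connection to MQM1 or MQM2 ties the message to that server.

  * Issued on MQG1: show where the cluster can deliver the queue
  * (queue name is a placeholder)
  DISPLAY QCLUSTER(DNIV.INPUT.QUEUE) ALL

  * One instance is reported for MQM1 and one for MQM2. Connecting
  * directly to MQM1 or MQM2 instead selects that server explicitly.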