Installation topology options
You can set up your installation in various configurations. Select a configuration that best suits your production and facility needs.
The configuration of the nodes of your installation depends on the following factors:
- Throughput capacity
- High availability
- Physical server resources
- Flexibility in relocating nodes to different physical servers
Use the following guidelines when you are deciding what installation to use:
- You can install a single node or a multiple node cluster. Tip: IBM® suggests that you install a multiple node cluster to take advantage of the high availability features of AS4 Microservice. You can install a single node and later add a node to make a cluster installation.
- Each node can contain up to four members, with no more than one member of a particular type (informational, operational, catalog, container). By default, all four member types are included in the software that is used to install a node. For more information about member types, see Installation architecture. A sketch that illustrates these per-node rules follows this list.
- You have two options for a single node installation that includes all four member types: a nonproduction proof of concept node and a single nonproduction node. These installations do not have high availability because you cannot shift work to another node and you cannot upgrade the node without stopping the whole installation.
- For high availability installations, you have many options. The minimum and recommended high availability clusters that are described in the following sections are two of those options.
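
The following Python sketch is purely illustrative: it checks a hypothetical node layout against the per-node rules that are listed above (up to four members per node, no more than one member of each type). The member type names come from this page; the function and the example layouts are assumptions for illustration, not part of AS4 Microservice.

```python
# Illustrative only: checks a proposed node layout against the guidelines
# on this page (at most one member of each type per node, up to four
# members per node). The example layouts are hypothetical.
MEMBER_TYPES = {"informational", "operational", "catalog", "container"}

def validate_node(members):
    """Return a list of rule violations for one node's member list."""
    problems = []
    unknown = set(members) - MEMBER_TYPES
    if unknown:
        problems.append(f"unknown member types: {sorted(unknown)}")
    if len(members) > 4:
        problems.append("a node can contain at most four members")
    duplicates = {m for m in members if members.count(m) > 1}
    if duplicates:
        problems.append(f"more than one member of a type: {sorted(duplicates)}")
    return problems

# A single node installation with all four member types (no high availability).
print(validate_node(["informational", "operational", "catalog", "container"]))  # []

# Invalid: two catalog members on the same node.
print(validate_node(["catalog", "catalog", "container"]))
```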
Nonproduction proof of concept node with no high availability
The proof of concept installation has the following features:
- All four member types are in a single node.
- The message provider is a built-in installation of WebSphere® eXtreme Scale.
- The members connect to a built-in Derby database.
- The node can connect to a perimeter server.
The proof of concept installation has the following benefits:
- You can install it quickly.
- You need only one server to install it.
- You do not need to set up an external database or message provider.
The proof of concept installation has the following risks:
- It does not have high availability because of the following reasons:
- The work on a failed node cannot be picked up by another node.
- You cannot upgrade the node without stopping the only node in the installation.
- You cannot pre-configure the database or message provider.

Single nonproduction node with no high availability
The single node installation has the following features:
- Up to four member types are in a single node.
- The message provider is a separate installation of WebSphere MQ.
- The members connect to a database that is separately installed.
- The node can connect to a perimeter server.
A single node installation with all four members has the following risks:
- It does not have high availability because of the following reasons:
- The work on a failed node cannot be picked up by another node.
- You cannot upgrade the node without stopping the only node in the installation.
- It has less capacity than a minimum or recommended high availability installation.

Minimum high availability production or nonproduction cluster
The minimum high availability installation has the following features:
- A total of four member types across all the nodes of a clustered installation, and not more than one of each member type per node. Tip: IBM suggests that cluster installations use a network load balancer to distribute the workload among nodes.
- The message provider is a separate installation of WebSphere MQ.
- The operational and data grid members connect to an operational database, and the informational members connect to an informational database, each of which is separately installed.
- A total of at least three catalog members, with one catalog member per node.
Important: AS4 Microservice uses the latest version of WebSphere eXtreme Scale, which now supports majority quorum. Majority quorum requires that more than half of the catalog members be available to avoid quorum loss. WebSphere eXtreme Scale might lose quorum if a catalog member fails and the system is also unable to connect with the container members. WebSphere eXtreme Scale might also lose quorum in the case of a brownout, a temporary loss of connectivity, or a network partition. Quorum is not affected if you stop a catalog member. For more information about stopping members in AS4 Microservice, see ../reference/as4/meg_member_stop_command.html.
Increasing the minimum number of catalog members per cluster helps prevent quorum loss, in which you might be unable to access and manage AS4 Microservice through the user interface (informational member). You might also encounter split-brain issues, an error state in which the system receives conflicting or outdated information because only half of the catalog members are available during a network partition. The sketch after this feature list illustrates the majority quorum arithmetic.
- The entire cluster connects to a perimeter server.
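
As a rough illustration of the majority quorum rule that is described in the Important note above, the following Python sketch computes the quorum threshold for a few catalog member counts. The counts are hypothetical examples, not configuration values, and the functions are not part of AS4 Microservice or WebSphere eXtreme Scale.

```python
# Illustrative arithmetic only: majority quorum requires that more than
# half of the catalog members remain available. The catalog member counts
# below are hypothetical examples, not configuration values.
def quorum_threshold(catalog_members: int) -> int:
    """Smallest number of available catalog members that is still a majority."""
    return catalog_members // 2 + 1

def survives_failure(catalog_members: int, failed: int) -> bool:
    """True if quorum is kept after `failed` catalog members become unavailable."""
    return catalog_members - failed >= quorum_threshold(catalog_members)

for total in (2, 3, 5):
    print(f"{total} catalog members: need {quorum_threshold(total)} available; "
          f"survives one failure: {survives_failure(total, 1)}")
# 2 catalog members: need 2 available; survives one failure: False
# 3 catalog members: need 2 available; survives one failure: True
# 5 catalog members: need 3 available; survives one failure: True
```

With only two catalog members, the loss of one already breaks the majority, which is why this cluster calls for at least three catalog members, one per node.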
The minimum high availability installation has the following benefits:
- The three servers in this installation help you avoid a single point of failure even after the loss of one server. This setup is good for planned maintenance because you can take down one server and still be prepared for an unplanned outage on another server. The server with the single catalog member ensures that majority quorum for WebSphere eXtreme Scale is still achieved if there is an outage on any one of the servers.
- At least three catalog members are distributed across the cluster so that majority quorum is achieved if one member is unavailable.
- The number of nodes that you need to achieve high availability is minimized. This setup also helps you to minimize performance tuning.
- Each container member (and the containing node) can be allocated enough memory to contain the entire cache in random access memory (RAM) when the other container member is unavailable, as illustrated in the sizing sketch after this list.
- This installation is a balanced approach between the high availability topology and the single member per node topology.
- Smaller nodes allow for relocation to different servers.
- The loss of one member, node, or server has a minimal impact on performance.
- Each physical server can have a smaller resource footprint. If a server fails, it might be easier to replace the server instead of trying to recover the node.
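
The following Python sketch illustrates the memory sizing point in the benefits list above: each container member is sized so that the entire cache still fits in RAM when the other container member is unavailable. The cache size and member counts are hypothetical placeholders, not sizing recommendations.

```python
# Illustrative sizing only: each container member (and its node) should be
# able to hold the entire cache in RAM when another container member is
# unavailable. The numbers below are hypothetical placeholders.
def per_container_ram_gb(total_cache_gb: float, containers: int,
                         tolerated_failures: int = 1) -> float:
    """RAM per container so the surviving containers can hold the whole cache."""
    survivors = containers - tolerated_failures
    if survivors < 1:
        raise ValueError("at least one container member must survive")
    return total_cache_gb / survivors

# With two container members, if one is unavailable, the remaining container
# must hold the entire cache.
print(per_container_ram_gb(total_cache_gb=64, containers=2))  # 64.0
```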
The minimum high availability installation has the following risks:
- You must have three servers for the installation.
- There are many interdependencies between components, which increases the complexity of troubleshooting problems.

Recommended high availability production or nonproduction cluster
The recommended high availability installation has the following features:
- The informational and operational members are installed on separate nodes from the data grid members (catalog and container). An example layout that follows this rule is shown after this feature list.
- All member types are installed on more than one node, which contributes to high availability.
- The message provider is a separate installation of WebSphere MQ.
- The members connect to a database that is separately installed.
- The cluster can connect to a perimeter server.
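
The following Python sketch shows one possible five-node layout that is consistent with the features above; the node names and the exact member placement are assumptions for illustration only, not a prescribed configuration. The checks mirror the stated rules: informational and operational members are on separate nodes from the data grid members, and every member type is installed on more than one node.

```python
# One possible layout that is consistent with the description above; the
# node names and member placement are assumptions for illustration, not a
# prescribed configuration.
DATA_GRID = {"catalog", "container"}
APPLICATION = {"informational", "operational"}

layout = {
    "node1": {"informational", "operational"},
    "node2": {"informational", "operational"},
    "node3": {"catalog", "container"},
    "node4": {"catalog", "container"},
    "node5": {"catalog", "container"},
}

# Check the separation rule: informational/operational members are not
# placed on the same nodes as data grid members.
for node, members in layout.items():
    assert not (members & DATA_GRID and members & APPLICATION), node

# Check that every member type is installed on more than one node.
for member_type in DATA_GRID | APPLICATION:
    nodes_with_type = [n for n, m in layout.items() if member_type in m]
    assert len(nodes_with_type) > 1, member_type

print("layout satisfies the stated placement rules")
```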
The recommended high availability installation has the following benefits:
- The five servers in this installation help you avoid a single point of failure even after the loss of one server. This setup is good for planned maintenance because you can take down one server and still be prepared for an unplanned outage on another server.
- Each container member (and the containing node) can be allocated enough memory to contain the entire cache in random access memory (RAM) when the other container member is unavailable.
- Smaller nodes allow for relocation to different servers.
- Catalog servers are distributed on servers that are unlikely to fail together.
- The loss of one member, node, or server has a minimal impact on performance.
- Each physical server can have a smaller resource footprint. If a server fails, it might be easier to replace the server instead of trying to recover the node.
The recommended high availability installation has the following risks:
- You must have five servers for the installation.
- There are many interdependencies between components, which increases the complexity of troubleshooting problems.
