Update: For corresponding info for WPS 7.0, see Creating a Network Deployment configuration.
I've talked about WebSphere Process Server (WPS) and its latest version, 6.2. The simplest way to install a WPS runtime is a single server on a single node in a single cell, which is perfectly adequate for unit testing (and in fact is the topology of the WPS test server in WebSphere Integration Developer (WID)). However, this is not the best setup for production. In production, you usually want a basic amount of high availability (HA), so that your users still get service even when part of your infrastructure goes down.
We have a recommended "golden topology" for WPS that, among other things, provides a pretty good level of HA. It's documented in the InfoCenter in "Tutorial: Building clustered topologies in WebSphere Process Server" and is summarized in this picture:
This topology is also explained in the article "Building clustered topologies in WebSphere Process Server V6.1" by my ISSW colleague Michele Chilanti. (In fact, I think Michele's article is the inspiration for the InfoCenter tutorial.)
Let's take a quick look at what's in the golden topology. Two details to observe are that the cell contains two nodes and three clusters. Why is that?
- Nodes -- The cell contains two nodes, which should be installed on two different host machines (or perhaps two LPARs on the same host machine). This way, if one node crashes or has to be taken down for maintenance, the work fails over to the other node and the users can still get their work done. During normal operation, workload management (WLM) distributes requests across both nodes (and in fact also handles the failover when one node fails). An even better topology might have three or four nodes for increased redundancy.
- Clusters -- Since the cell contains multiple nodes, the application should be deployed not in a single server running on a single node, but in a cluster with cluster members (a.k.a. application servers) on multiple nodes. In WPS, it's helpful to actually use three clusters: one for the business logic parts of the application and two for some of the main WPS infrastructure.
- Business logic cluster -- This is where you deploy your SCA modules, including business process modules that run in the Business Flow Manager and Human Task Manager.
- SIBus engine cluster -- This is for WPS to host messaging engines that are part of the service integration bus. A separate cluster is the easiest way to enable all of the business logic app instances to access the messaging engines even if/when they fail over.
- CEI (Common Event Infrastructure) engine cluster -- This is for WPS to report what's going on for monitoring by products like IBM Tivoli Composite Application Manager (ITCAM) (for IT monitoring) and WebSphere Business Monitor (for business monitoring). The separate CEI cluster helps lessen monitoring's interference with the business application.
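For a feel of what setting up the three clusters looks like at the scripting level, here's a rough wsadmin (Jython) sketch. In WPS 6.2 you would normally let the deployment environment wizard generate this configuration for you, so treat this as an outline, not a ready-to-run script: all cluster, node, and member names are made up, exact parameters vary by release, and it only works inside wsadmin (e.g. `wsadmin.sh -lang jython -f createClusters.py`), where the AdminTask and AdminConfig objects are defined.

```jython
# Sketch: create the three golden-topology clusters, one member per node
# so that each cluster can fail over between the two hosts.
# Names (AppCluster, MECluster, CEICluster, node1, node2) are illustrative.

for clusterName in ['AppCluster', 'MECluster', 'CEICluster']:
    AdminTask.createCluster('[-clusterConfig [-clusterName %s]]' % clusterName)
    AdminTask.createClusterMember(
        '[-clusterName %s -memberConfig [-memberNode node1 -memberName %s_member1]]'
        % (clusterName, clusterName))
    AdminTask.createClusterMember(
        '[-clusterName %s -memberConfig [-memberNode node2 -memberName %s_member2]]'
        % (clusterName, clusterName))

AdminConfig.save()
```

On top of this skeleton, WPS still has to be told which cluster hosts the messaging engines, which hosts CEI, and so on -- which is exactly what the deployment environment patterns automate.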
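To make the WLM behavior concrete, here's a toy Python sketch of the round-robin-with-failover idea: requests rotate across cluster members on both nodes, and when a node goes down its members are skipped, so all traffic fails over to the surviving node. This is only an illustration of the concept -- it's not WebSphere's actual WLM algorithm (which also handles member weights, session affinity, and so on), and all the names in it are made up.

```python
from itertools import cycle

class Cluster:
    """Toy model of a cluster of (node, member) pairs with WLM-style routing."""

    def __init__(self, members):
        self.members = list(members)      # e.g. [("node1", "member1"), ...]
        self.down_nodes = set()
        self._ring = cycle(self.members)  # round-robin over all members

    def mark_node_down(self, node):
        """Simulate a node crash or a node taken down for maintenance."""
        self.down_nodes.add(node)

    def route(self):
        """Return the next available (node, member), skipping down nodes."""
        for _ in range(len(self.members)):
            node, member = next(self._ring)
            if node not in self.down_nodes:
                return node, member
        raise RuntimeError("no cluster members available")

cluster = Cluster([("node1", "member1"), ("node2", "member2")])
print([cluster.route() for _ in range(4)])   # alternates between the two nodes

cluster.mark_node_down("node1")              # simulate losing node1
print([cluster.route() for _ in range(2)])   # everything now lands on node2
```

The point is simply that with members on two nodes, losing one node degrades capacity but not availability; with only one node, the same failure would be an outage.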
So when deciding how to install WPS in production, the golden topology is at least a good start.