IBM WebSphere Developer Technical Journal: Basic steps for clustering WebSphere Process Server

Set up a basic clustered IBM® WebSphere® Process Server installation using a step-by-step approach for a simple, yet robust, clustered topology that addresses both availability and scalability.


Michele Chilanti (chilanti@us.ibm.com), Consulting IT Specialist, IBM

Michele Chilanti is a Consulting IT Specialist with the IBM Software Services organization. He has over 15 years of experience working with a variety of products of the IBM software portfolio. Currently, he consults on a daily basis with IBM customers in the areas of business process modeling, implementation, and deployment. Michele regularly presents at conferences worldwide, and has authored a number of IBM and external technical publications.



19 April 2006


From the IBM WebSphere Developer Technical Journal.

Introduction

The main objectives of server or process clustering are to improve system availability and to accommodate increased workload. Server or process clustering can:

  • Increase the system's availability by providing redundant processes or hardware components that can ensure some level of continuity of service in case of failures.

  • Provide a mechanism to accommodate additional workload (scalability).

The concepts of failover and scalability are largely independent; you may find that a topology that ensures scalability may not be very good at ensuring availability, and vice-versa. IBM WebSphere Process Server provides many different ways to use clustering techniques to address availability and scalability.

The first objective of this article is to briefly present a few topological solutions for WebSphere Process Server clustering and to discuss the trade-offs of the various approaches. Subsequently, you will step through the process for setting up what could possibly become the most commonly adopted clustering topology for WebSphere Process Server.


Key topologies for WebSphere Process Server clustering

At a high level, every WebSphere Process Server environment involves three fundamental layers: WebSphere Process Server applications, a messaging infrastructure, and one or more relational databases, as shown in Figure 1.

Figure 1. Components to be clustered
  1. WebSphere Process Server applications. The WebSphere Process Server applications include the process server infrastructure code, such as the Business Process Choreographer, and any user applications that exploit the process server functions. These applications must be installed on an application server that has WebSphere Process Server capability. Conceptually, clustering of WebSphere Process Server applications is not significantly different from clustering a plain J2EE™ application in an IBM WebSphere Application Server V6 environment.

  2. One or more relational databases. WebSphere Process Server requires certain application configuration and runtime information to be stored in relational database tables. The messaging infrastructure, discussed next, also uses relational database tables for persistence. Clustering of relational databases for scalability and availability is a well established discipline, and so we will not spend time discussing techniques for clustering relational databases. However, we will discuss how to set up the necessary databases and schemas to support a WebSphere Process Server cluster.

  3. Messaging infrastructure. WebSphere Process Server also requires a messaging infrastructure. Some of that messaging infrastructure must use WebSphere Application Server Service Integration Buses (SI Buses) and the WebSphere Default Messaging Java™ Message Service (JMS) Provider. In this article, we will not consider other messaging providers for any of the messaging requirements of WebSphere Process Server. We assume that you rely entirely on the SI Buses, which at this point is the recommended practice.

Clustering the messaging infrastructure is perhaps the most complex aspect of the overall clustering discussion. In general, since the SI Bus runs inside a WebSphere Application Server process, the messaging infrastructure can be clustered using the standard WebSphere clustering techniques. However, you need to understand a number of considerations when you select the topology to adopt. The next section discusses some of those considerations.


Clustering the messaging infrastructure

A WebSphere Application Server SI Bus lets you define, among other things, destinations (such as queues or topics) that applications can use to send or retrieve messages. To make those destinations physically available, you designate an application server process (or cluster of processes) where the messaging infrastructure can run. You do this by adding a member to the SI Bus.

A member can be either a single server of a WebSphere Application Server Network Deployment cell, or a cluster of servers. When you add a member to the bus, a messaging engine is also created on the member. The messaging engine is the component in the application server process that implements the logic of the messaging infrastructure itself.

After you add a cluster as a member of an SI Bus, each cluster member is capable of running the messaging engine. However, only one cluster member will have an active messaging engine at any given time. The high-availability policy that applies to the messaging engine is a One-of-N policy.

There are two key options when it comes to clustering messaging engines:

  • You have a single messaging engine that gets created automatically when you add the cluster as a member of the SI Bus. As we mentioned, this operation creates a messaging engine, which in turn uses a One-of-N policy for high-availability, resulting in a single instance of the messaging engine being active.

    In this case, there is only one physical repository for the messages associated with the destination. This scenario ensures availability; however, scalability can only be achieved by providing additional computing power to the server (essentially, by configuring the application server on more powerful hardware).

    Figure 2. An active/stand-by clustered topology for messaging
  • Multiple messaging engines are active at the same time. Adding the cluster to the SI Bus results in the creation of the first messaging engine. You can also manually create additional messaging engines on that cluster for that SI Bus. Each messaging engine will operate with a One-of-N policy, but since you have multiple engines, you can now have multiple active instances. You can create your own high availability policies to define where each active instance should run by default, and thus evenly distribute the active engines across the various cluster members.

    This solution, however, implies that the destinations on the SI Bus are partitioned. In other words, each instance of the messaging engine controls and works with a portion of the entire queue. Such a topology does ensure scalability and some degree of availability. However, since there is no single repository for all messages in the queue, this configuration carries with it potential issues related to workload balancing, order of message processing, and orphaning of messages. You should limit this option to those cases where it is proven that acceptable throughput cannot be otherwise achieved (Figure 3).

    Figure 3. An active/active clustered topology with partitioned queues

Now that you understand the two fundamental options for clustering a messaging engine, let's discuss where the messaging engine can be located with respect to the WebSphere Process Server applications. The two options are:

  • The messaging engine is co-located with the WebSphere Process Server applications. In other words, the messaging engine runs within the same cluster as the WebSphere Process Server applications.

  • The messaging engine is located in its own cluster, separate from the WebSphere Process Server applications.

These two options lead to four possible combinations:

  1. The messaging engine and WebSphere Process Server applications are co-located. This option implies serious limitations on the overall scalability of the solution. These topologies, however, may be simpler to set up and manage. As a rule, avoid these specific topologies:

    1. Non-partitioned queues, active/stand-by topology. This topology is simple to set up and requires a small number of servers and a single cluster. However, it has a major disadvantage: only one message-driven bean (MDB) at a time will be active and able to consume messages. This can severely limit the scalability of such a configuration.

      Figure 4. The simplest clustering topology: active/stand by
    2. Partitioned queues, active/active topology. This topology has the advantage (over the previous one) of providing a scalable environment where multiple MDBs are active at the same time, albeit on different partitions of the same queues. To ensure failover, however, you need to configure separate clusters for each messaging engine. Because of queue partitioning, there are potential issues related to workload balancing, order of message processing, and orphaning of messages.

      Figure 5. An active/active topology with partitioned queues
  2. The messaging engine and WebSphere Process Server applications are located in separate clusters. Adopt this technique whenever practically possible.

    1. Non-partitioned queues, active/stand-by topology. This topology allows you to achieve true scalability of the WebSphere Process Server applications, because it enables multiple MDBs to be active at the same time.

      Figure 6. Segregating the messaging engines in a separate cluster

      In addition, since the queues are non-partitioned, there are no special considerations as to workload balancing and orphaning of messages. It also lets you tune and configure the WebSphere Process Server cluster independently of the messaging engine cluster. The only caveat is the messaging engine itself, which can only be scaled by placing it on a more powerful system.

    2. Partitioned queues, active/active topology. This topology allows full scalability and separate administration of the WebSphere Process Server cluster and of the messaging engine cluster. However, because of the partitioned destinations, there are potential issues related to workload balancing, order of message processing, and orphaning of messages. The setup of such topologies is also a complex task.

      Figure 7. Partitioned queues for maximum messaging scalability

      This solution, although it is the most flexible from a scalability and failover standpoint, should be adopted only if it is proven that the key limiting factor of throughput is the messaging engine -- and with a thorough understanding of the limitations on the applications.


Clustering the relational databases

As we briefly mentioned above, ensuring the high availability and scalability of a relational database platform can be done through a number of known and proven techniques. However, this topic is outside the scope of this article.


Target topology

For our sample scenario, we adopted a topology with two separate clusters: one for the WebSphere Process Server applications, and one for the messaging engine. For the messaging engine, we chose the active/stand-by (One-of-N) default policy with non-partitioned destinations. Figure 8 summarizes the target topology.

Figure 8. The topology described in this article
  • On the box ISSW, we installed the deployment manager and the WebSphere Process Server cluster.
  • On the box ISSW2, we installed the messaging engine cluster.
  • On a third box, DBSrv, we have IBM DB2® and the databases needed for the topology.

For the sake of simplicity, we chose to document how to set up this topology. It is important to realize that the failure of either the ISSW or ISSW2 systems will cause an outage. With relatively minor changes to the following instructions, you can set up a similar, more available topology, as depicted in Figure 9.

Figure 9. A variation of the sample topology

This topology uses the same hardware, but it eliminates the single point of failure that results from creating a cluster that resides on a single physical box.


Install WebSphere Process Server

For this project, we installed WebSphere Process Server V6.0.1.0 on both the ISSW and the ISSW2 systems. Version 6.0.1 is a complete refresh of the whole WebSphere Process Server product and is also packaged as a fully installable image. The installation procedure of WebSphere Process Server V6.0.1 also installs the prerequisite WebSphere Application Server Network Deployment V6.0.2.3. We chose the Custom Install path and selected not to install the samples (Figure 10).

Figure 10. Installing WebSphere Process Server

When the installation completed, we selected not to run the profile creation wizard, which we ran manually in a subsequent step.

Of course, this is not the only way to achieve the target topology. In fact, WebSphere Process Server code is not needed on ISSW2, because the only purpose of that system is to run the messaging infrastructure. You can install WebSphere Application Server Network Deployment V6.0.2.3 for messaging, and then federate any such node to the WebSphere Process Server cell. You can also proceed by augmenting an existing installation of WebSphere Application Server to WebSphere Process Server on the nodes that need it.


Create the databases for WebSphere Process Server

A number of database tables are needed for WebSphere Process Server. You have the freedom to create one or more databases to host the various schemas that contain the tables, and you need to find the right balance between creating many separate databases and creating all the schemas and tables in a single database. The single-database approach is discouraged because you cannot individually tune each database. On the other hand, a large installation can have multiple independent environments (with multiple business process containers, and so on), so you can end up with an extremely large number of databases if you create a separate database for each component that needs one.

The recommended practice is to create one database for the messaging engines and one for the other WebSphere Process Server components (the WebSphere Process Server tables, BPEL, and so forth). You can then create different schemas within these databases.

Typically, you need to manually create the databases ahead of installing and configuring WebSphere Process Server.
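For example, you can create the two recommended databases from a DB2 command window on the database server. This is a minimal sketch, assuming the database names used later in this article (ISSWWPS for the WebSphere Process Server repository and ISSWMEDB for the messaging engines) and the same UTF-8 codeset clause that the product's own DDL scripts use:

    REM From a DB2 command window (db2cmd) on the DBSrv system
    db2 CREATE DATABASE ISSWWPS USING CODESET UTF-8 TERRITORY en-us
    db2 CREATE DATABASE ISSWMEDB USING CODESET UTF-8 TERRITORY en-us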

For the objects within the databases, such as schemas and tables, you have two options:

  • Let the WebSphere Process Server administrative functions create the objects automatically for you. This implies that your database administrator agrees that the people who are responsible for configuring WebSphere Process Server are going to be creating database objects. In a real-life production environment, this is hardly ever the case.

  • Have the database administrator create databases, schemas, tables, and other objects for you. You will need to provide scripts for that.

We will follow a mixed approach: some of the schemas are created manually, and the system creates the others. However, we will indicate where to find the scripts that perform the creation, should you want to follow a completely manual approach.

WebSphere Process Server needs the following schemas:

  • For the WPS repository. The default name for this database is WPRCSDB. Here, we created a new and empty DB2 database (ISSWWPS) and then let the profile wizard create the schema and the tables for us in it. You can create the tables when you create the deployment manager profile (explained later).

    As an alternative, you can use the SQL scripts located in: \WebSphere\ProcServer\profileTemplates\dmgr.wbiserver\actions\scripts .

  • For the business process container and human task container. You must create this schema manually. The scripts are in the \WebSphere\ProcServer\ProcessChoreographer directory.

    The example creates a new database called ISSWBPE and uses the script createDatabaseDb2.ddl to create the tables. We edited the file and changed the first two SQL statements to indicate the desired database name:

    -- create the database
    CREATE DATABASE ISSWBPE USING CODESET UTF-8 TERRITORY en-us;
    -- Use CONNECT TO BPEDB USER xxx when another user should become owner of the schema
    CONNECT TO ISSWBPE;
    
  • For each messaging engine. These schemas can be created automatically at the time you configure the messaging engine. However, you can also create them manually. In that case, you need to generate the scripts using the sibDDLGenerator command in the WebSphere/ProcServer/bin directory. For example, to generate a script that will create a schema called ISSWME, within the database ISSWMEDB, to which the user MICHELE is fully authorized, you can use the following command:

    sibDDLGenerator.bat -system db2 -schema ISSWME -user MICHELE -create -statementend ; > ME.ddl

    This scenario creates a single database and four schemas (one per messaging engine). We edited the ME.ddl file obtained in the previous step and changed the schema name to match the desired value. (A sketch for running the edited script against the database appears after this list.)

  • A Common Event Infrastructure (CEI) database may be needed for WebSphere Process Server. However, the scripts for this database aren't available until you have created a profile. You can create this database later in the section that configures CEI.
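To execute an edited DDL file such as ME.ddl against the messaging engine database, you can use the DB2 command line processor. This is just a sketch; the database, user ID, and file name are the ones from our example, and <password> is a placeholder:

    db2cmd
    db2 connect to ISSWMEDB user MICHELE using <password>
    db2 -tf ME.ddl
    db2 connect reset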

Table 1 lists the databases and schemas we created before creating the profiles.

Table 1. Databases, schemas, and their function
  • ISSWWPS. Schemas: none at this point; the profile creation wizard will create the schemas and tables. Function: the WPS repository.
  • ISSWBPE. Schemas: same as the user ID used to connect to the database. Function: holds the process choreographer and human task manager information.
  • ISSWMEDB. Schemas: one per messaging engine -- MEBPE for the choreographer, MECEI for the Common Event Infrastructure, MESCAAPP for the SCA application messaging engine, and MESCASYS for the SCA system messaging engine. Function: holds the persistent information for the messaging engines.

Create the deployment manager profile

Create the WebSphere Process Server deployment manager profile:

  1. Launch the profile creation wizard from the directory C:\WebSphere\ProcServer\bin\ProfileCreator_wbi. Make sure that you select this directory, and not the ProfileCreator directory, or you will create the wrong type of deployment manager.

  2. Select Deployment manager profile in the profile type selection (Figure 11), then Next.

    Figure 11. Creating a Deployment Manager Profile
  3. Name your profile (for example, ISSWDmgr), then click Next. (Figure 12)

    Figure 12. Naming the Deployment Manager Profile
  4. Select the default profile directory (Figure 13). Click Next.

    Figure 13. Specifying the Deployment Manager Directory
  5. Specify the host, node, and cell name (we left the defaults), then Next. (Figure 14)

    Figure 14. Specifying node, host, and cell name
  6. The subsequent step is important: it asks for the user ID and password that the Service Component Architecture (SCA) infrastructure will use to connect to the system and application SI Buses (Figure 15). The user ID and password specified in the dialog must be a valid WebSphere Application Server user ID and password. These credentials are typically independent of the credentials you specify to connect to the various databases. For the purposes of this article, we assume that the same credentials are used for the SI Buses and the relational databases.

    Figure 15. Specifying Security Credential for the SCA SI Buses
  7. Click Next. You are now asked whether you want to create a new database or use an existing one. We had already created an empty database, so we selected Use an existing database. Note that automatic creation of the database is only allowed if the database is local. The wizard is smart enough to recognize that the database is empty and will create the schema if necessary; if the database already contains the schema, the wizard skips the creation (Figure 16).

    Figure 16. Selecting a Database for WebSphere Process Server
  8. Click Next. Review the summary and click Finish.

    Figure 17. Reviewing the Deployment Manager Creation Options
  9. Once the profile has been created, you might want to check that the schemas and tables were created too. We did so by connecting to our database server and verifying the presence of the following tables in the ISSWWPS database (Figure 18):

    Figure 18. Tables created for the Deployment Manager

    You can also do so by issuing this command:

    db2cmd
    db2 connect to ISSWWPS user <...> using <...>
    db2 list tables for schema <user name>

    There are also scripts available to populate the database, in the directory <INSTALL_ROOT>/profileTemplates/dmgr.wbiserver/actions/scripts.

  10. Verify that the two SI Buses for SCA were created. Start the deployment manager, open the console, and select Service Integration => Buses. You should see two SI Buses (SCA.APPLICATION.<cell name>.Bus and SCA.SYSTEM.<cell name>.Bus). You can also perform this check from the command line, as shown in the sketch after these steps.

    Figure 19. SI Buses created for SCA
  11. Verify that the J2C authentication aliases are in place. These are used in data sources and connection factories:

    1. Select Global security => J2C authentication data entries.

    2. You should have an SCA_Auth_Alias (to be used when connecting to the messaging engines) and a WPSDBAlias (for the WebSphere Process Server database).

      Figure 20. Authentication aliases created for the Deployment Manager Profile
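If you prefer the command line to the console, a quick sanity check is possible with wsadmin. This is only a sketch: it assumes the deployment manager is running and uses generic WebSphere Application Server configuration queries, not any WebSphere Process Server-specific command:

    cd \WebSphere\ProcServer\profiles\ISSWDmgr\bin
    wsadmin -lang jacl
    wsadmin> # should list SCA.APPLICATION.<cell name>.Bus and SCA.SYSTEM.<cell name>.Bus
    wsadmin> $AdminConfig list SIBus
    wsadmin> # should include SCA_Auth_Alias and WPSDBAlias
    wsadmin> $AdminConfig list JAASAuthData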

Create a WebSphere Process Server custom profile

Now that you have a deployment manager profile, you can create the profiles for the nodes. Let's start with the WebSphere Process Server node. In our example, the WebSphere Process Server node co-exists with the deployment manager. In your topology, you may have multiple nodes residing on separate machines. In our scenario, there is just a single node for WebSphere Process Server and one for the messaging engine, but conceptually the same steps apply to larger configurations.

  1. Launch the WebSphere Process Server profile creation wizard from the directory: C:\WebSphere\ProcServer\bin\ProfileCreator_wbi.

  2. Select Custom profile, then Next.

    Figure 21. Creating a custom WebSphere Process server profile
  3. Select Federate this node later, and click Next (Figure 22).

    Figure 22. Options for the custom WebSphere Process Server profile

    To federate the node immediately, the deployment manager process has to be up and running.

  4. Name this profile and make it the default (Figure 23). Click Next.

    Figure 23. Naming the custom WebSphere Process Server profile
  5. Choose a directory for installation and click Next.

  6. Specify the node name and host name (the defaults are usually fine), then click Next.

  7. Select the database type and driver location. For DB2, the driver is shipped with the product and no changes are usually needed.

    Figure 24. Selecting the database options for the custom WebSphere Process Server Profile
  8. Review the summary and click Finish to complete the creation of the profile.

  9. Exit the wizard without further actions.


Create a WebSphere Application Server custom profile

In our topology, there is a single WebSphere Application Server node for messaging (the system ISSW2). On that node, we need to create a base WebSphere Application Server profile. There is no use in creating a WebSphere Process Server profile to run just the messaging infrastructure.

  1. Launch the WebSphere Application Server profile creation wizard from C:\WebSphere\ProcServer\bin\ProfileCreator. Note that this is a different wizard than the one we used to create our WebSphere Process Server profile.

  2. Choose Custom Profile. The profile types are presented in a different order when you compare this screen to the one we saw for WebSphere Process Server. Click Next.

    Figure 25. Creating a Plain WebSphere Application Server Profile
  3. Choose the option to Federate this node later (to federate the profile right away, you need to have the deployment manager running), then Next. (Figure 26)

    Figure 26. Options for the WebSphere Application Server Profile
  4. Provide a name for your profile. Click Next.

  5. Verify (or change) the host name and node name. Click Next.

  6. Verify the summary and click Finish to complete the creation.

  7. Exit the wizard without further actions.


Add the nodes to the deployment manager

Now, you can federate the nodes to the cell using the addNode command.

  1. Start the deployment manager. On the system where the deployment manager profile was created (ISSW), run the startManager command out of \WebSphere\ProcServer\profiles\ISSWDmgr\bin.

  2. When the Deployment Manager is up and running, on the same DM system (ISSW), add the WebSphere Process Server node to the cell:

    1. Change directories to: \WebSphere\ProcServer\profiles\ISSW01\bin.
    2. Run addNode issw.rchland.ibm.com.
    3. Wait for completion. The node agent is also started at this time.

    When you need to add multiple nodes, make sure that you add one node at a time. Wait until each addNode command has completed before issuing the subsequent addNode command. (A sketch of the fuller addNode syntax appears after these steps.)

  3. Switch to the system where the WebSphere Application Server profile was created (ISSW2) and federate that node:

    1. Change directories to \WebSphere\ProcServer\profiles\ISSWWAS\bin.
    2. Run addNode issw.rchland.ibm.com.
    3. Wait for completion. The node agent is also started at this time.
  4. Start the administrative console and verify that the two nodes have been federated. Select System administration => Nodes. You should see the two nodes.
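The addNode calls above rely on the default deployment manager SOAP connector port (8879). As a sketch, the fuller form of the command names the port explicitly, and passes credentials if administrative security is enabled; the user ID and password shown here are placeholders:

    cd \WebSphere\ProcServer\profiles\ISSWWAS\bin
    addNode issw.rchland.ibm.com 8879 -username <admin user> -password <admin password>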


Create the cluster for the messaging engines

  1. Create a WebSphere Application Server cluster for the messaging engines. This cluster doesn't need WebSphere Process Server capability, because it only runs the messaging infrastructure.

    1. Choose Servers => Clusters => New. Name your cluster MECluster. (Figure 27).

      Figure 27. Configuring the cluster for messaging
    2. Click Next. Create an initial cluster member. Name it MEMbr01 and select the node where you want the member to reside. Leave the server template set to default. By using a default template, you create a "plain" WebSphere Application Server, rather than a WebSphere Process Server-capable server (Figure 28).

      Figure 28. Selecting the correct server template

      Here's a tip: You can create more members now or later, but just create one member to start with, and then grow the cluster as appropriate by adding members later on.

    3. Click Next. Review the summary and finish the creation process.

  2. Create the Provider and Data Source for the Messaging engines:

    1. Create the JDBC Provider at the cluster level. Navigate to Resources => JDBC Providers and then select a cluster scope (Figure 29).

      Figure 29. Creating the data source for messaging at the cluster level
    2. Click New to create a new provider. Select the database type, provider type, and implementation (Figure 30).

      Figure 30. Configuring the data source for messaging
    3. Complete the creation of the provider.

  3. For DB2, update the DB2 driver environment variables. In general, it is best to create these variables at the node level, because that gives you the freedom to customize each node independently. In our scenario, the systems we are using are identical in terms of disk drives, directory structure, database driver location, and so on. Therefore, we can define those variables at the cell level, so that every box in the cell will inherit those definitions (Figure 31).

    Figure 31. Defining WebSphere variables for the data source

    If you define variables at the cell level, make sure that you don't also have the same variables defined at the node or server level. Lower level definitions override the cell level definitions.

  4. Create the actual data source. Since we have decided to create a single database for all the messaging engines, and multiple schemas within it, one data source is sufficient.

    1. Choose Resources => JDBC Providers and click your JDBC provider (the one we created at the cluster level).

    2. Click Data Sources and then New.

    3. Specify a name for the data source, such as MEDataSource, and a JNDI name, such as jdbc/MEDataSource.

    4. Specify a component-managed authentication alias (here we have used a single alias for all the databases, which assumes that all databases share the same authentication information). For DB2, also specify the Database name and Port number. Set the Driver type to 4 in this case; it should be consistent with the driver type used in the provider (Figure 32).

      Figure 32. Configuring security and database settings for the messaging data source
    5. Click OK and save.

  5. Add the cluster to the SI Buses that are used by SCA:

    1. Navigate to Service Integration => Buses.

    2. Select SCA.SYSTEM.<cell name>.Bus, then Bus members.

      Figure 33. Adding a member to the SCA SYSTEM SI Bus
    3. Click Add. Specify the MECluster as the member and the JNDI name of the data source that you created (Figure 34).

      Figure 34. Selecting the MECluster as an SI Bus member
    4. Click Next and confirm the addition of the cluster to the bus.

  6. The previous step created a messaging engine on the cluster. Now, change it so that it points to the correct database schema within the database:

    1. Choose Service Integration => Buses and click the SCA.SYSTEM.<cell name>.Bus.

    2. Select Messaging Engines, then the messaging engine on the resulting screen, and then click Data Store.

  7. Change the schema to the correct schema (in this case, it is MESCASYS, because that's the schema you created for the SCA system bus). Uncheck Create tables, because you've already created the schema and tables -- but automatic creation could occur if appropriate. Click OK and save. (Figure 35)

    Figure 35. Setting the correct database schema name for the messaging
  8. Repeat the previous steps for the SCA.APPLICATION.<cell name>.Bus, making sure you specify a different schema name, such as MESCAAPP, for the data store definition.

  9. Start the cluster to make sure that the messaging engines work (they should be in the "started" state; also check the SystemOut.log for the servers in the cluster). A command-line way to check the engine state is shown in the sketch after this list.
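To check the state of the messaging engines from the command line, you can query their MBeans with wsadmin. This is a sketch, assuming the SIBMessagingEngine MBeans of WebSphere Application Server V6 expose a state attribute; the ObjectName placeholder is pasted from the output of the query:

    wsadmin -lang jacl
    wsadmin> # one MBean per active messaging engine
    wsadmin> $AdminControl queryNames type=SIBMessagingEngine,*
    wsadmin> # should return Started for a healthy engine
    wsadmin> $AdminControl getAttribute <messaging engine ObjectName> state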

Should you ever need to re-create a bus, make sure that you drop and re-create the schema, or you will have trouble when the messaging engine starts up, because of existing data in the database schema itself.
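One way to drop the schema cleanly is to let sibDDLGenerator produce the matching drop statements and run them before re-creating the tables. This sketch assumes that your level of sibDDLGenerator supports the -drop option; review the generated file before executing it:

    sibDDLGenerator.bat -system db2 -schema ISSWME -user MICHELE -drop -statementend ; > ME_drop.ddl
    db2 connect to ISSWMEDB user MICHELE using <password>
    db2 -tf ME_drop.ddl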


Create the WebSphere Process Server cluster

Now that the messaging infrastructure is in place, we can create the WebSphere Process Server cluster in which the WebSphere Process Server applications can be installed.

  1. To create the cluster:

    1. Select Servers => Clusters and then New. For Step 1, assign a name to your cluster.

    2. Create one member of type defaultProcessServer (make sure that you change the default to avoid creating a cluster that doesn't support WebSphere Process Server applications). Also, make sure that you select a node where WebSphere Process Server is installed. In our scenario, you created a profile for WebSphere Process Server only on isswNode01.

      Figure 36. Creating the cluster for the WebSphere Process Server applications
    3. Review the summary and complete the creation of the cluster.

  2. Configure the cluster for SCA support. A WebSphere Process Server cluster needs further configuration to point to the SI Buses for SCA, to install the Business Rules Manager, and so on.

    1. Navigate to Servers => Clusters and click the WebSphere Process Server cluster we just created.

    2. Select Advanced Configuration.

    3. On the next screen, check Install Business Rules Manager. For the time being, leave the JNDI name of the CEI emitter factory to the default.

    4. Specify that you want to host SCA applications by selecting Remote Destination Location and specifying the messaging engine cluster (MECluster). Figure 37 shows how your screen should look.

      Figure 37. Options for WPS cluster configuration
    5. Click OK and save your work. The cluster is now configured to host SCA applications.

  3. Every SCA-enabled server or cluster needs a destination on the SCA system SI Bus for failed events.

    1. Navigate to Service Integration =>Buses.

    2. Select the SCA.SYSTEM.<cell name>.Bus.

    3. Select Destinations. Make sure that there is a destination named WBI.FailedEvent.<your WPS cluster name>. If you can't see that destination, create it by selecting New, then Queue, and creating a destination called WBI.FailedEvent.<your cluster name>.

      Figure 38. Creating the failed event SI Bus destination
    4. Save your work.


Create the JMS resources for process container and human tasks

The business process container and the human task container require a set of queue connection factories, queues, and activation specifications to work. These resources need to exist on a messaging infrastructure. You can either use WebSphere MQ as an external JMS provider, or the WebSphere default messaging and SI Bus support. We are using the latter.

There are two possible approaches:

  • Use wizards. You can rely on the business process container and human task container install wizards to create the JMS resources and physical destinations. If you choose this approach, you can skip this section and jump to the next section.

  • Create them in advance and then reuse them as you install the business process and human task containers. This gives you full control over the names of the resources you are creating, but also forces you to perform several configuration steps.

For demonstration purposes, we will use the manual creation approach:

  1. Create an SI Bus for the business process and human task containers:

    1. Choose Service Integration => Buses.

    2. Click New.

    3. Name your bus and specify the authentication aliases for secure connectivity. You can use the same alias used for SCA.

      Figure 39. Creating the SI Bus for Process Choreography

      Warning: If you select a name other than BPC.<your cell name>.Bus, you can run into issues. The BPC and HTM installation wizards create a new bus and new resources if they don't find a bus by that name. If you want a different name, you can still follow the steps in this article and then remove the redundant definitions.

  2. Add the messaging engine cluster to the bus and set the database schema to the correct value:

    1. Select Service integration => Buses.

    2. Open the bus you just created and click Bus Members. Click Add and add the messaging engine cluster to the bus. This operation creates another messaging engine.

    3. On the same bus, click Messaging Engines => MECluster.000-BPC.isswCell01.Bus => Data Store.

    4. Set the schema to MEBPE and uncheck Create Tables.

    5. Click OK and save the changes.

      Figure 40. Defining the Data store for the Process Choreography SI Bus
    6. Restart the messaging engine cluster and make sure the new messaging engine is started.

  3. Create the physical queues for BPC and HTM. You need six queue destinations on the bus you just created:

    1. Navigate to Service integration => Buses and click the bus for BPC and HTM.

    2. Click Destinations.

    3. Click New. Select Queue on the subsequent screen.

    4. Click Next. Specify BPEApiQueue_WPSCluster for the identifier. It's a good idea to name the physical resources with a reference to the server or cluster that will use those destinations. This technique avoids name conflicts in case you configure multiple independent business process containers on different clusters or servers. If you let the wizard create the resources for you, this naming convention is adopted automatically.

    5. Click Next. Select the messaging engine member MECluster.

    6. Click Next and confirm the creation.

    7. Repeat the steps for the following queues:

      • BPEIntQueue_WPSCluster
      • BPERetQueue_WPSCluster
      • BPEHldQueue_WPSCluster
      • HTMIntQueue_WPSCluster
      • HTMHldQueue_WPSCluster
    8. When finished, you should see the following destinations on the BPC.<your cell name>.Bus (Figure 41).

      Figure 41. Destinations Created on the Choreography SI Bus
    9. Save all the changes.

  4. Create the JMS Resources for the business process container and the human task container. Now that we have the destinations and the bus, you can create the connection factories, the JMS queues, and the activation specifications.

    1. Go to Resources => Default messaging and set the scope to the cluster level. Select WPSCluster (not the MECluster) for the scope.

      Figure 42. Selecting the WPSCluster for the JMS destinations
    2. Create the connection factories first. Select JMS Queue Connection Factories and click New.

    3. Set the name to BPECF and JNDI name to jms/BPECF.

    4. Set the bus name to BPC.<your cell name>.Bus (or whatever bus name you created for BPC and HTM).

      Figure 43. Creating a JMS Connection Factory for Process Choreography
    5. Set the Component-managed authentication alias to the J2C authentication alias we used to authenticate to the database, WPSDBAlias.

      Figure 44. Setting the authentication parameters for the JMS Connection Factory
    6. Click OK.

    7. Repeat the steps for BPECFC (jndi: jms/BPECFC) and HTMCF (jndi: jms/HTMCF).

    8. You should now have three queue connection factories (Figure 45).

      Figure 45. The three JMS Connection Factories you created
    9. Save your work.

    10. Create the JMS queues. Choose Resources => Default Messaging again and make sure the scope is still at the WPSCluster level. Let's start with the business process container queues.

    11. Select JMS queue under Destinations.

    12. Click New. Specify BPEApiQueue for the name and jms/BPEApiQueue for the JNDI name.

    13. Select the bus we created from the pull-down list, and then select the corresponding destination, BPEApiQueue_WPSCluster.

      Figure 46. Creating a JMS Queue for Process Choreography
    14. Click OK.

    15. Repeat these steps for the remaining queues, selecting the matching physical destination from the bus. When you are finished, the queues shown in Figure 47 are configured.

      Figure 47. All the Queues needed for Process Choreography and Human Tasks
    16. Save your configuration.

    17. Create the activation specifications. Two activation specifications are needed for the flow manager and one for the human tasks. Navigate again to Resources => JMS Providers => Default Messaging and make sure that the scope is set to the WPSCluster level.

    18. Click JMS Activation Specification.

    19. Click New and specify BPEApiActivationSpec for the name, and eis/BPEApiActivationSpec for the JNDI name.

    20. Set the Destination type to Queue and the Destination JNDI name to jms/BPEApiQueue. Select the bus where you want the activation specification created (it has to be the BPC.<your cell name>.Bus.) Select the authentication alias for your messaging engine.

      Figure 48. Creating the activation specification for Process Choreography
    21. Click OK.

    22. Use similar steps to create two more activation specs: BPEInternalActivationSpec and HTMInternalActivationSpec. (A quick command-line check of the resulting resources is sketched after these steps.)

    23. Save your configuration.
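Before moving on, you can verify from wsadmin that the JMS artifacts exist. This sketch uses only generic configuration-type queries; the type names are standard WebSphere Application Server V6 configuration types, and the expected entries are the ones created in the steps above:

    wsadmin -lang jacl
    wsadmin> # activation specifications (expect BPEApiActivationSpec and the two internal specs)
    wsadmin> $AdminConfig list J2CActivationSpec
    wsadmin> # physical SI Bus queue destinations (expect the six BPE*/HTM* queues)
    wsadmin> $AdminConfig list SIBQueue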


Business process container setup

You are now ready to set up the business process container using the wizard.

  1. Run the business process container installation wizard:

    1. Select Servers => Clusters => WPSCluster => Business Process Container => Business Process Container Installation Wizard.

    2. On the first page, select the database type. Make sure that you select Type 4 for the driver and that you change the database name to your actual database name (in our example: ISSWBPE). Click Next.

      Figure 50. Database settings for the BPC container
    3. Leave the JMS provider set to the Default messaging provider and specify a user ID and password to access it. Also specify a valid WebSphere user ID and password for the JMS API, and a valid WebSphere group or user for the Administrator role and for the System Monitor role. Notice that the Queue Manager Name is irrelevant if you use the SI Bus.

      Figure 51. JMS settings for the BPC Container
  2. Click Next. On the subsequent screen, check Select Existing JMS Resources and make sure that the resources are selected correctly. Also check the option to install the BPC Web client, and for now leave the CEI checkboxes unchecked.

    Figure 52. Selecting JMS resources for Process Choreography

    Click Next, review the summary, and click Finish. Make sure that the install process completes successfully (Figure 53), and save your changes.

    Figure 53. Successful completion of the BPC container install
  3. Make a small configuration change to the data source that was created automatically for the business process container messaging engine. The wizard assumes that the messaging engine for the process container uses the same database as the business process container itself. Since this is not the case in our scenario (we have a separate database for all the messaging engines), we need to update the reference to the database:

    1. In the console, navigate to Resources => JDBC Providers and then set the scope to the Cell level.

    2. Click WPSDefaultDatasource_DB2_Universal, then Data sources.

    3. Click BPC.T40MC2Cell02.Bus_WPSCluster_datasource. Scroll down to the section where the database is defined and set it to be ISSWMEDB (Figure 54).

      Figure 54. Configuring the correct database for Process Choreographer messaging
    4. Click OK and save the configuration changes.


Human task manager setup

Now, install the human task manager. The procedure is very similar to that for the business process container:

  1. Run the HTM install wizard by selecting Servers => Clusters => WPSCluster => Human Task Container => Human Task Container Installation Wizard.

  2. For Step 1, specify the same JMS options as for the Business Process Container.

    Figure 55. Database configuration for the Human Task
  3. For Step 2, check Select Existing JMS Resources and choose whether you want a mail session.

  4. For Step 3, review the summary and click Finish.

  5. Make sure the HTM container installation completes successfully and save your configuration.


Configure CEI on the cluster

Now, you can configure and install the Common Event Infrastructure on the WebSphere Process Server cluster. In this simple topology, we install CEI on the same cluster as the rest of the WebSphere Process Server components. In larger environments, you can configure the CEI in a separate cluster.

  1. Create the Common Event Infrastructure database by generating some scripts and executing them:

    1. Open a command prompt and change your current directory to: <WPS_INSTALL_ROOT>\profiles\<WPS Profile directory>\event\dbconfig.

    2. There are many scripts here; use the one that corresponds to DB2. Make a copy of the DB2ResponseFile.txt file and open it with a text editor to make the following changes (your values for the driver location, driver type, and DB instance port may differ, depending on how DB2 is configured):

      CLUSTER_NAME=WPSCluster
      SCOPE=cluster
      DB_NAME=EVENT
      DB_NODE_NAME=<your remote DB node name>
      JDBC_CLASSPATH="<WPS_INSTALL_ROOT>\universalDriver_wbi\lib"
      UNIVERSAL_JDBC_CLASSPATH="<WPS_INSTALL_ROOT>\universalDriver_wbi\lib"
      JDBC_DRIVER_TYPE=4
      DB_HOST_NAME=<your DB host name>
      DB_INSTANCE_PORT=50000
      EXECUTE_SCRIPTS=NO
    3. Save the changes.

    4. Run config_event_database.bat DB2ResponseFile.txt.
    5. The previous command creates a set of scripts to create the CEI database in: <WPS_INSTALL_ROOT>\profiles\<WPS profile directory>\event\dbscripts\db2. Make that directory your current directory.

    6. Run this command to create the CEI database on a remote system: cr_event_db2.bat client <user> <password>.

    7. Once you have successfully created the database, change directories to: <WPS_INSTALL_ROOT>\profiles\<WPS profile directory>\event\dsscripts\db2. The scripts to configure the data sources you need for CEI are located here.

    8. Run this command to create the datasources on the cluster (make sure that the deployment manager is up and running first): cr_db2_jdbc_provider cluster WPSCluster.

    9. Provide the database user ID and password when prompted to do so and let the script complete.

  2. Verify that the command created the JDBC provider and the data sources.

    1. Open the administrative console.

    2. Navigate to Resources => JDBC Providers and set the scope to the cluster level. You should see a JDBC provider called "Event DB2 JDBC Provider." (Figure 56)

      Figure 56. The JDBC provider defined for CEI
    3. Click the provider and then click Data Sources. You should see two data sources (Figure 57).

      Figure 57. Data sources created for CEI
  3. Now, you can install the two CEI applications: one implements the CEI engine and the other allows asynchronous publishing of CEI events.

    1. Start by installing the CEI engine. Make <WPS_INSTALL_ROOT>\profiles\<WPS Profile>\event\application your current directory.

    2. Issue the following command on a single line (this command works for DB2 V8.2, assuming your WPS cluster is called WPSCluster):

      ..\..\bin\wsadmin -profile event-profile.jacl -f 
      event-application.jacl -action install 
      -earfile event-application.ear 
      -backendid DB2UDBNT_V82_1 -cluster WPSCluster
    3. Now, let's install the CEI message application for asynchronous publishing. In the same directory, issue the following command (on a single line):

      ..\..\bin\wsadmin -profile event-profile.jacl -f 
      default-event-message.jacl -action install 
      -earfile event-message.ear  -cluster WPSCluster
  4. These two commands not only installed the applications, but they also created some resource definitions for CEI. In a clustered environment, you need to modify some of those definitions.

    1. Open the console and choose Service Integration => Buses. Notice that a new bus called CommonEventInfrastructure_Bus was created (Figure 58).

      Figure 58. The SI Bus created for CEI
    2. Click that bus. On the subsequent screen, click Bus Members. Notice that the WPSCluster has been added as a bus member. This is not what we want, since we have defined a separate cluster to host the messaging engines (Figure 59).

      Figure 59. Incorrect Bus member for the CEI SI Bus
    3. Select that bus member and click Remove.

    4. Subsequently, click Add and add the MECluster to the bus. Specify jdbc/MEDataSource for the data source JNDI name, as we previously did for other buses. (Figure 60)

      Figure 60. Setting the correct cluster as a member of the CEI SI Bus
    5. Click Next and then Finish.

    6. Click CommonEventInfrastructure_Bus and click Messaging Engines.

    7. Click the MECluster.000-CommonEventInfrastructure_Bus and click Data Store.

    8. Specify MECEI for the schema, select the WPSDBAlias for the authentication alias, and uncheck Create Tables (Figure 61).

      Figure 61. Setting the Correct Database Schema for the CEI Messaging Engine
    9. Click OK and save your changes.

  5. You now need to create two physical destinations on the bus for CEI.

    1. Go back to Service Integration => Buses and click the CommonEventInfrastructure_Bus.

    2. Click Destinations.

    3. Click New and select Queue. Click Next.

    4. Type CommonEventInfrastructureQueueDestination for the Identifier. Click Next.

    5. Make sure that the bus member is set to be MECluster.

    6. Click Next and then Finish.

    7. Click New again, and this time select Topic space.

    8. Type CommonEventInfrastructure_AllEventsTopic for the Identifier. Click Next.

    9. Click Next and Finish.

    10. Save your configuration changes.

  6. Recycle the clusters so that they pick up the new changes.

    1. Stop WPSCluster and MECluster.

    2. Start MECluster.

    3. Start WPSCluster.

    4. Ensure that the application servers start cleanly by checking the SystemOut.log files of both clusters. (A command-line sketch for this check follows these steps.)
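A quick way to confirm a clean start is to scan each member's SystemOut.log from the command line. This sketch assumes the default log locations and the standard WebSphere startup and messaging engine message texts:

    cd \WebSphere\ProcServer\profiles\<profile name>\logs\<server name>
    REM the "open for e-business" message indicates the server completed startup
    findstr /C:"open for e-business" SystemOut.log
    REM CWSID0016I messages report the state of each messaging engine
    findstr /C:"CWSID0016I" SystemOut.log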


Verify cluster functionality and grow the clusters

Initially, we recommend you create clusters with a single cluster member. By doing so, you will be able to more easily test the configuration while you are building the clusters. Once you are comfortable with your configuration, you will be able to add new members to the clusters.

To test the configuration, you will want a simple test application that includes some SCA functionality and a long-running business process with staff activities, so that you can exercise most of the components that we configured and installed.

Before you proceed with application installation and testing, follow these steps:

  1. Verify that you don't have duplicate SI Bus destinations and JMS resources. As we mentioned before, if you called your business process container and HTM SI Bus something other than BPC.<cell name>.Bus, you will get a new bus and a redundant set of JMS resources.

    1. Choose Service Integration => Buses. If you notice that BPC.<cell name>.Bus was created in addition to the bus you created, remove it.

    2. Choose JMS Providers => Default messaging => JMS queue connection factory and verify that you only have one set of connection factories for BPC and HTM (BPECF, BPECFC, HTMCF).

    3. Choose the JMS queue and verify that you only have the queues you created. Remove any redundant queue definition.

    4. Navigate to your activation specification and make sure there aren't any redundant definitions.

  2. Verify the creation of the data source for business process and human task containers.

    1. Choose Resources => JDBC Providers.

    2. Set the scope to the cell level.

    3. Verify that there is a data source for the BPE container. Write down its JNDI name (it should be jdbc/BPEDB_<your cluster name>; for example: jdbc/BPEDB_WPSCluster). You will be using this name when installing applications on the WPSCluster that contain BPEL processes or human tasks.

  3. Start the Messaging engine cluster and subsequently the WebSphere Process Server cluster. Verify that each cluster starts correctly by checking the SystemOut.log of each member.

  4. Verify that the Business Process Explorer application works correctly. The application can be reached at http://<WPS cluster hostname>:<port number>/bpc. For instance, if you are testing on your local machine, and you have the default HTTP transports for your WebSphere Process Server cluster member, you can try: http://localhost:9080/bpc.

    If the transport is not the default transport (for example, the WebSphere Process Server cluster member's Web container works on port 9081) you need to add a virtual host to your cell for that port:

    1. Open the console, and choose Environment => Virtual Hosts.

    2. Click default_host and then Host Aliases.

    3. Click New.

    4. Specify * for the host name and the port number (for example, 9081) for the Port (Figure 62).

      Figure 62. Configuring the Virtual Host for the WebSphere Process Server cluster
    5. Click OK and save.

    6. Restart the WebSphere Process Server cluster and re-test the business process explorer.

  5. Install and test your application. Once you have verified that the BPC explorer works correctly, you are in a good position to install and run the test applications you may have prepared. You can use the BPC explorer to kick off any business processes in those applications.

  6. To grow the clusters, proceed as follows:

    1. Using the console, choose Servers => Clusters and then click the name of the cluster you want to grow (for example, MECluster).

    2. Click Cluster Members and then New.

    3. Specify a member name, and select a node to host the new cluster member.

    4. Click Apply if you want to add more members. At the end, click Next.

    5. Check the summary and click Finish.

    6. Repeat these steps for the WebSphere Process Server cluster.

    7. Add the virtual hosts corresponding to the additional cluster members (each cluster member will listen on its own port for HTTP and HTTPS; you need to add virtual host definitions to match those ports). See the sketch after this list for a quick way to review the alias list.

    8. Restart the clusters.

    9. Ensure that each cluster member starts correctly by checking the SystemOut.log of each server.

  7. Test the applications on the final configuration. Now that the final configuration is up and running, you can perform a series of tests to ensure that the applications are running on each cluster member. A first, very simple test is to ensure that the business process container explorer is running on each member.

    Subsequently, you can run a long-running BPEL process and verify that it executes correctly. If the process is made up of a large number of activities, you should verify that those activities are executed on different cluster members. Testing failover is also important. You can make sure that long-running processes continue to run if the server where they were initiated is stopped abruptly -- and you can verify the failover behavior of short-running processes.
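To review the host alias list after growing the clusters, you can dump the virtual host configuration with wsadmin instead of clicking through the console. This sketch uses only generic configuration queries; the configuration ID printed by the list command is pasted into the show command:

    wsadmin -lang jacl
    wsadmin> # find the configuration ID of default_host
    wsadmin> $AdminConfig list VirtualHost
    wsadmin> # show its aliases; expect one entry per host/port combination
    wsadmin> $AdminConfig show <default_host configuration ID> {aliases}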


More complex topologies

The topology described in this article is one example of a clustered topology for WebSphere Process Server. However, there are many variations that you need to keep in mind to address larger environments. Without discussing them in detail, here are two additional topological themes.

Partitioned destinations

As discussed earlier, partitioning the destinations provides the advantage of having multiple instances of the messaging engines for SCA, BPC, and CEI active at the same time. As we mentioned, this option is a way to increase the scalability of the messaging engine.

In general, it is best to avoid this approach, since it carries significant limitations. In summary:

  • There is no guarantee of true balancing across partitions. For this reason, you can end up with significantly under-utilized or over-utilized partitions.

  • Stranded (or orphaned) messages are also an issue with certain configurations that include partitioned destinations. This problem occurs when a cluster member fails and the messaging engine fails over to another cluster member. In some circumstances, there are no message-driven beans ready to process messages from the failed-over messaging engine, and any message on the affected partition remains unprocessed.

  • Ordering is also a known issue with partitioned destinations. There is no guarantee that messages can be processed in the order they were sent to the destination. For some applications, this can be a major problem.

Separating the "administrative" applications from the SCA. BPEL, and HTM applications.

WebSphere Process Server includes a number of ancillary applications to help with administration and troubleshooting of WebSphere Process Server applications. Those applications are (in addition to the administrative console, which is not specific to WebSphere Process Server):

  • Failed Event Manager
  • Relationship Manager
  • Common Base Event Browser
  • Business Rules Manager
  • Business Process Observer
  • Business Process Choreographer Explorer

The first three applications in this list are hosted by the Deployment manager. Making them highly available implies making the Deployment manager highly available. This can be done using known techniques, such as placing the Deployment Manager on its own system, and protecting that system with an operating system failover mechanism (for example, HACMP for AIX).

The Business Rules Manager and the Business Process Observer can be isolated, if needed, on their own cluster and decoupled from the WebSphere Process Server applications. Of course, this choice makes the topology and administration more complex, but it also allows for a better scalability of the overall solution.

You might also consider isolating the Common Event Infrastructure server on a separate cluster, and have WebSphere Process Server applications publish their events asynchronously to a "remote" CEI, rather than hosting the CEI on the same cluster.


Conclusion

WebSphere Process Server offers a solid basis for customers to build a scalable and highly available application environment. This paper is a starting point for those customers and practitioners who need to set up a basic clustered WebSphere Process Server installation.

You've learned how to set up a reasonably simple and yet robust clustered topology. However, you must keep in mind that there are a wide variety of possible topological solutions for WebSphere Process Server, which ensure different levels of scalability and availability.
