Installing and configuring an IBM Operational Decision Management golden topology

Learn how to choose and configure deployment topologies for IBM® Operational Decision Management (IBM ODM) in distributed environments. This article explains the essential concepts needed to understand highly available and scalable WebSphere Application Server Network Deployment environments, then introduces the IBM ODM server components and explains the characteristics or constraints of those components that affect deployment decisions. This content is part of the IBM Business Process Management Journal.


Pierre Feillet, Software Architect, IBM

Pierre Feillet is the runtime product architect for IBM Operational Decision Manager at the IBM France Lab. He has 20 years of experience in business rules, building decision management products in both IBM and ILOG labs. He has shaped the rule server runtime in IBM ODM, WebSphere ODM, and JRules since release 4.5. Pierre is responsible for decision management architecture and topologies in distributed, cloud, System Z, and mobile environments.

James Withers, Professional Services, SwiftKey

James Withers was a member of the Quality Assurance team for IBM Operational Decision Management at the time of the writing of this document. He has experience in system configurations on both distributed and System z platforms. James holds a PhD in Neuroinformatics, and has research interests in machine learning, image processing, and accessible user interface devices. He currently works in professional services at SwiftKey™.

Jose De Freitas, Advisory Software Engineer, IBM

Jose de Freitas is a member of the Quality Assurance team for IBM Operational Decision Management. In this capacity, he is involved in the configuration and testing of ODM deployment environments. Jose previously worked on IBM Business Process Management (BPM), in both lab and customer-facing roles, where his responsibilities included the installation and configuration of IBM BPM and IBM Business Monitor clustered topologies.

22 May 2013

Also available in Chinese


This article guides you through the installation and configuration of IBM Operational Decision Management (ODM) V8 deployment environments. It describes concepts that are essential to understanding highly available and scalable WebSphere Application Server Network Deployment environments, introduces the ODM server components, and explains the characteristics and constraints of those components that affect deployment decisions in a distributed environment.

In addition to discussing the layout of those components within a WebSphere Application Server cell topology, we'll also look at the different deployment environments that are needed to take an ODM solution from the development stage to the production stage. Finally, we'll take you through the step-by-step installation and configuration of an ODM reference or "golden" topology.

In this article, we'll use the term ODM to refer to both IBM Operational Decision Manager V8.0.1 and the previous WebSphere Operational Decision Manager V8.0.0.1, except in those contexts where a reference to a specific version of the product is required.


IBM WebSphere Application Server, Network Deployment V8.0 [8] is a robust, highly available, Java™ EE compatible middleware environment. It provides a platform for hosting and managing enterprise applications, and for configuring and accessing their related resources. WebSphere Application Server is one of the possible deployment environments for IBM ODM [4].

A WebSphere Application Server configuration consisting of related application servers, HTTP servers, nodes, clusters, cells, and other resources (such as databases) is known as a topology. A golden topology is a topology recommended for a class of scenarios; it is designed to implement best practices and is documented together with the reasoning behind its design decisions.

In order to describe the golden topologies for ODM, we'll introduce the topology-related terms used throughout this article. For further information on these terms, please refer to [1] and [2].

The ODM topologies use two types of server:

  • Application servers are Java virtual machines (JVMs) provided by WebSphere Application Server that run applications and provide services.
  • Web servers route requests for content from web applications, and can call applications running on an application server. The IBM HTTP Server for WebSphere Application Server [10] plug-in provides this functionality and can also implement a high-availability policy and workload balancing.

When an environment has multiple servers, a logical organization can be configured to simplify management and to more efficiently access resources. A node is a group of servers and a managed node has a node agent that enables management of its servers. The configuration of a node is recorded in its profile and a predefined update to a profile is known as augmentation. A node group inside a cell is a collection of nodes that have the same available resources, stand-alone server configuration, and cluster configuration.

A cell is an administrative domain of managed nodes. The process of adding a node into a cell is known as federation. The deployment manager is a single special node inside the cell that provides a central point of administrative control for all parts of a cell. When a change is made to the configuration of the cell using the deployment manager, a synchronization process must take place with all the node agents.

A cluster is a grouping of managed servers inside a cell that often spans different managed nodes. Clusters enable workload balancing for your applications in order to improve performance or to provide a highly available environment. Application deployment to a cluster is simplified because it presents a single logical deployment target.

Servers within a cluster are called cluster members, and those that are not in a cluster are called stand-alone servers. The types of applications that are typically deployed to stand-alone servers include those that do not require, or do not benefit substantially from workload balancing or high-availability policies. This situation may be a result of the style in which the application is accessed, its relative importance compared to other applications, or limitations in the application design.

Deploying a resource or application to a WebSphere Application Server environment occurs at a certain scope. The scope is a hierarchical concept; for example, selecting a cell scope would deploy a resource or application on the cluster members and stand-alone servers within that cell. The deployment scope is important because there is an advantage in deploying related applications on the same server due to the reduced cost of local EJB calls compared to cross-server calls [3].

The deployment scope is useful for defining scalability within a WebSphere Application Server cell. This concept is usually defined along two axes:

  • The horizontal scalability of a cell allows for additional processing capacity through the replication of nodes, in order to increase the number of members in each cluster. This type of scalability assumes deployments are performed at the cluster scope. Workload balancing can employ this extra processing capacity to create highly available environments or to improve application performance.
  • The vertical scalability of a cell is dependent on the processing capacity of individual nodes. This type of scalability determines the amount of processing capacity available on each node and it may limit how many resources, cluster members, stand-alone servers and their constituent applications can be made available on a node.

In order to tackle these scalability issues, we introduce the concept of a cell topology. A cell topology can usually be succinctly described using a cell topology diagram (two of which are presented later in this article) and it describes a scalable configuration inside a single cell, in terms of:

  • The deployment of applications inside clusters and stand-alone servers
  • The membership of servers inside clusters
  • The identification of stand-alone servers
  • The mapping of stand-alone servers and clusters to nodes
  • The access to resources at different scopes

More complex systems may require several cells linked by access to resources, by a high availability policy, or by business processes. For example, applications could be developed and tested in distinct cells until they are ready for deployment into a production cell. We will introduce an environments topology in which each component cell references a particular cell topology.

In order to document and explain the ODM golden topologies, we will now introduce the components of ODM and the constraints that affect their deployment.

Description of ODM components and constraints

ODM is IBM's implementation of a business rules and events management and processing system. For a full description of ODM's components, refer to the ODM Information Center [4]. Access to ODM components can be controlled with different user roles stored within a federated repository. User access management is only discussed briefly in this article; for more detailed information refer to [6] and [7].

ODM has two main components:

  • Decision Center (DC) enables business rule design and lifecycle management and is supported by a social media framework.
  • Decision Server (DS) enables business rules and events configuration and processing through its subcomponents Decision Server Rules and Decision Server Events.

Decision Server Rules contains the functionality for business rules configuration and processing. Its Rule Execution Server (RES) is a business rule execution platform that enables the processing of RuleApps and their constituent Rulesets by its rule engine. The RES eXecution Unit (XU) provides Java EE Connector Architecture (JCA 1.5) and Service Provider Interface (SPI) system-level contracts that allow the server to connect to the rule engine. The XU resource adapter archive (RAR) can be deployed into WebSphere Application Server at the node scope or packaged within a rules application.

In this article, we assume that the XU RAR is deployed in WebSphere Application Server. Installation and deployment of the XU RAR occurs at the node scope and hence it can be accessed by any cluster member or stand-alone server in a node; it becomes available for multiple applications and the size of its JCA connection pool can be controlled. The XU is managed in the cell using a Management Bean (MBean) for which either a classpath reference to the MBean JAR must exist at the server scope where it is deployed, or the MBean JAR must be copied into the WebSphere Application Server lib directory. We do not consider the case where the XU runs independently of the management model. As of ODM V8.0.1, in addition to the JMX API, a Representational State Transfer (REST) API has also been made available to facilitate remote management.

The XU has several supporting enterprise application archive (EAR) files. Two EAR files are required for processing Message Driven Bean (MDB) and Enterprise Java Bean (EJB) rule sessions. Another EAR file is required for processing Hosted Transparent Decision Services (HTDS), which exposes the rule execution capabilities of Decision Server Rules as web services. All three EAR files must be collocated with the XU.
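As a concrete illustration of HTDS exposing rule execution as a web service, the sketch below retrieves the WSDL of a deployed decision service. The host, port, credentials, and RuleApp/ruleset names are illustrative assumptions, and the /DecisionService/ws path reflects the default HTDS context root, which may differ in your installation.

```shell
# Sketch (assumed names): fetch the WSDL of a ruleset exposed by HTDS.
# Replace host, port, credentials, and RuleApp/ruleset names and versions
# with those of your own deployment.
curl -u resAdmin:resAdmin \
  "http://localhost:9080/DecisionService/ws/loanApp/1.0/eligibility/1.0?wsdl"
```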

Figure 1. Decision Server Rules components
Decision Server Rules components

The RES console allows administration tasks to be performed and may be deployed as an enterprise application at the stand-alone server scope. The RES console should not experience heavy usage, and usually only one rule administrator will be accessing it at a time. The console is not required to be running for the XUs to operate, and hence it is not a critical point of failure.

Another approach, which is recommended since the release of the ODM V8.0.1 PureApplication System patterns, places the RES console in a cluster, thus ensuring high availability for the component. This approach has the drawback that two administrators working simultaneously on two RES web console instances may not see exactly the same data until a manual synchronization has been performed.

Another component of Decision Server is the Scenario Service Provider (SSP), which allows testing and simulation of RuleApps, and implements a Decision Validation Service (DVS). When using a remote SSP, this enterprise application must be collocated on a server where the XU RAR is available.

Decision Server Events has a different set of components and relationships to Decision Server Rules. The events runtime is an enterprise application used for business events coordination, monitoring and execution. It is deployed at either the stand-alone server or cluster scopes, and it cannot be packaged with an application. Assuming that your rule and event projects reference each other, the performance advantages of local calls suggest that the events runtime and the XU should be available in the same cluster. We also recommend that each events runtime deployment has only one event project assigned to it.

The events runtime provides an administrative console as part of the enterprise application. Since the console is not generally expected to be in heavy use or used by more than one user at a time, it does not impose additional constraints on the deployment and usage of the event runtime.

Event detection and actions are performed using technology connectors such as file, user console and JDBC connectors. The particular technology connectors required for events applications are detected when event projects are uploaded or at server start-up, and they can be deployed as enterprise applications at the same scope as the relevant events runtime.

The events runtime also requires a messaging infrastructure within a cell. There is great flexibility as to how such a messaging infrastructure can be configured and we defer the choice of the messaging provider (for example, WebSphere MQ) and leave some of the implementation options for vertical and horizontal scalability to the choice of the reader. The messaging infrastructure can also be employed for MDB sessions used in rule applications. In general, the performance benefits of local calls mean that the infrastructure should be made available in the same cluster as the events runtime and the XU.
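The article leaves the choice of messaging provider open; as one hedged illustration, a WebSphere service integration bus scoped to the Decision Server cluster could be created with wsadmin. The bus and cluster names below are assumptions to adapt.

```shell
# Sketch (assumed names): create a service integration bus and add the
# Decision Server cluster as a bus member using wsadmin.
cat > createBus.py <<'EOF'
AdminTask.createSIBus('[-bus odmBus -busSecurity false]')
AdminTask.addSIBusMember('[-bus odmBus -cluster DecisionServerCluster]')
AdminConfig.save()
EOF
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -f createBus.py
```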

When a ruleset is invoked from an event, the rule runtime and the events runtime for the project have to be collocated on the same servers.

The final components of Decision Server are Rule Designer and Event Designer. These are Eclipse-based integrated development environments (IDEs) that provide a user interface for rules and events project construction and deployment to the Rule Execution Server and events runtime. Rule Designer and Event Designer are hosted and run on a desktop computer and, although they are connected to the server-based components, they are external to the golden topology.

Decision Center has fewer subcomponents than Decision Server. Its Business Console and Enterprise Console are deployed together as a single enterprise application at the stand-alone server or cluster scopes. The Business Console enables collaboration between business users while authoring business rules, whereas the Enterprise Console has many more functions, including running simulations, deploying artifacts to Decision Server, and repository control. The Enterprise Console also connects to Rule Solutions for Office, which enables editing of business rules in Microsoft Office documents. These consoles are expected to have heavy continuous use and hence will benefit from deployment at the cluster scope.

In ODM V8.0.0.1, IBM Business Space, which comprises a set of enterprise applications deployed at the stand-alone server or cluster scopes, was required for the Decision Center and Events widgets. Business Space widgets and an accompanying event test enterprise application were deployed on the same server as an events runtime. As of ODM V8.0.1, the Decision Center and Events widgets have been replaced by a Java EE application and Business Space is no longer required.

ODM requires access to several databases. Decision Server Rules uses the following databases:

  • Decision Server Rules (RES) database, which stores executable rule artifacts structured as RuleApps, Rulesets and related Java eXecutable Object Models (XOMs).
  • Decision Warehouse database, which stores the rule execution traces.

Decision Center has its own database to persist business rule artifacts and DVS reports. In addition, Decision Server Events and Business Space (in ODM V8.0.0.1) have their own databases.

The configuration of those databases, in terms of scalability and high availability, falls outside the scope of this article. The datasources used for connecting to those databases should be deployed at the node scope so that resources relevant to the node (such as the location on disk of the database connectivity JARs) can be correctly configured, and also to ensure breadth of access to the datasources from ODM components and other enterprise applications.
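As an illustration of node-scoped datasource creation, the following wsadmin sketch defines a JDBC provider and a datasource for the RES database. The cell and node names, driver path, and database name are assumptions to adapt; jdbc/resdatasource is the default JNDI name expected by Rule Execution Server.

```shell
# Sketch (assumed names/paths): create a node-scoped JDBC provider and a
# RES datasource with wsadmin. Adapt node name, driver path, and database.
cat > createResDatasource.py <<'EOF'
provider = AdminTask.createJDBCProvider('[-scope Node=odmNode01 '
    '-databaseType DB2 -providerType "DB2 Universal JDBC Driver Provider" '
    '-implementationType "Connection pool data source" '
    '-classpath /opt/db2/java/db2jcc4.jar]')
AdminTask.createDatasource(provider, '[-name RES -jndiName jdbc/resdatasource '
    '-dataStoreHelperClassName com.ibm.websphere.rsadapter.DB2UniversalDataStoreHelper '
    '-configureResourceProperties [[databaseName java.lang.String RESDB]]]')
AdminConfig.save()
EOF
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -f createResDatasource.py
```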

Description and implementation of golden topologies

The following sections describe the ODM golden topologies and explain the motivations behind the design decisions:

  • The Operational Decision Manager topology (which was called the Decision Management topology in ODM V8.0.0.1) provides full capabilities for rule and event authoring, test and execution within a single cell.
  • The Decision Server topology allows rule and event test and execution within a single cell, without the authoring capabilities of the Decision Manager topology.
  • These recommended topologies are leveraged to build environments in phased application deployment scenarios – from development phases through to the test and production phases.

The topologies have been designed to maximize performance, to provide workload balancing, and to offer all the capabilities of ODM. The Decision Manager topology is described first and in the most detail, and is accompanied by practical step-by-step deployment instructions later in the article.

The Operational Decision Manager golden topology

The Operational Decision Manager golden topology, shown in Figure 2, provides full capabilities for rule and event authoring, test and execution. For a topology focused on test and execution only, please refer to the Decision Server topology.

Figure 2. Operational Decision Manager golden topology
Operational Decision Manager golden topology

In this topology, each node (except the deployment manager) contains a Decision Center cluster and a Decision Server cluster. Since the RES console is not a critical component of the topology and it does not have any scalability requirements, it runs in a stand-alone server on one of the nodes. Workload balancing is achieved through the use of IBM HTTP Server and the Web Server plug-in for WebSphere Application Server. The implementation of fail-over or other high availability policies is not covered in this article.

The Decision Server cluster is designed to exploit the efficiency of local EJB calls within the same server in order to maximize performance of rule and event execution, testing and simulation. Therefore, all the rule and event testing, simulation and execution is performed within this cluster. For the same reason, the messaging infrastructure for event processing (which is also usable by MDB rule sessions) is also present there, along with your applications that use rules and events. The constituent ODM components in this cluster include the SSP, EJB, MDB, HTDS, event runtime and various event connector enterprise applications.

The XU RAR and the datasources are deployed at the node scope. These resources are useful for applications in any server inside a node in the Decision Manager topology. Furthermore, the deployment of the XU RAR at the node scope allows efficient local calls to be made from the event runtime and other Decision Server cluster enterprise applications.

The Decision Center consoles do not have the same performance and scalability requirements as the components mentioned in the Decision Server cluster. Therefore their enterprise application has been moved to a separate Decision Center cluster. This approach allows some flexibility in the design of the topology; if you don't require authoring capabilities or you wish to minimize hardware resource usage, then you may want to consider removing this cluster (as proposed in the Decision Server topology).

Since it is highly likely that multiple users will be accessing services and applications concurrently, we recommend that the concurrency options for the XU, events runtime, messaging infrastructure, datasources, and HTDS (through the ruleset property ruleset.xmlDocumentDriverPool.maxSize) be adjusted in order to provide good performance.
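For example, the HTDS document driver pool can be enlarged through the ruleset property mentioned above; the value shown is illustrative and should be tuned against your expected concurrency.

```
# Illustrative ruleset property, set through the RES console
# (Ruleset view > Properties); 50 is an example value, not a recommendation.
ruleset.xmlDocumentDriverPool.maxSize = 50
```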

Decision Server golden topology

Figure 3. Decision Server golden topology
Decision Server golden topology

The Decision Server topology, shown in Figure 3, is similar to the Operational Decision Manager topology, but does not contain a Decision Center cluster. Therefore, the topology allows rule and event testing and execution but is without authoring capabilities (except those provided by Rule Designer and Event Designer, which run separately on a desktop computer).

Environments topology

Isolation of your production environment from testing and other environments in separate cells (as described in [5]) has several benefits, such as providing specialist test environments and helping to prevent catastrophic failure. The environments topology is a phased deployment topology, consisting of five cells mapped to successive deployment phases:

  1. Development, where rule and event applications are first developed.
  2. Testing, where simulations and integration testing are performed.
  3. Staging, where business users can edit rule and event projects.
  4. Pre-production, which has exactly the same configuration as the production phase.
  5. Production, where rules and event processing takes place on production data.

This five-phase example is provided as a reference for the discussion that follows. You may choose, for example, only to have three phases or cells, or the development environment could be a stand-alone server. Our goal is to help you choose an environments topology that is suitable for your specific requirements and resource constraints.

Figure 4 exemplifies one of the extremes in an environments topology spectrum that ranges from centralized management to maximum isolation.

Figure 4. Centralized decision management
Centralized decision management

The advantages and disadvantages of the centralized management approach shown above can be summarized as follows:


Advantages:

  • One single point of business authoring to deploy on all environments
  • Decision Center branch and merge can be leveraged to deploy executable rules on any selected Decision Server
  • High availability with clustered Decision Center and Decision Server environments
  • Isolation between authoring and execution workloads


Disadvantages:

  • Doesn't allow for customization of Decision Center specifically per phase
  • The shared Decision Center repository becomes a single point of failure for authoring and deployment, unless the database is configured for high availability
  • Requires access rights tuning to scope authorized actions
  • No replica Decision Manager environment for testing maintenance activities and other high risk changes

The opposite extreme, shown in Figure 5, consists of isolated management and execution cells for all stages.

Figure 5. Staged decision management
Staged decision management


Advantages:

  • Full isolation between development lifecycle stages, with a full ODM runtime in a single cell
  • Isolation of authoring and execution by phase and cell
  • The Decision Center may be customized differently in each cell, including security customization
  • High availability with clustered Decision Centers and Decision Servers


Disadvantages:

  • More JVMs and Decision Center databases to provision and manage
  • Decision Center repository content has to be synchronized across cells from the Development cell to the Production cell

To exemplify the thought process behind the choice of an environments topology, let's consider a scenario where the first three phase cells have an Operational Decision Manager topology, and the final two phases (pre-production and production) have the same topology – either an Operational Decision Manager topology or a Decision Server topology. The choice of cell topology for the final two phases depends on whether you require the authoring capabilities of Decision Center in your production environment. The pre-production phase cell provides a final chance to identify undesired behavior of your rules and events applications before they reach the production phase.

In the first three phases, as an application progresses through the phases it is promoted from the Decision Center in one phase cell to the next using Rule Designer, Event Designer or the Decision Center Enterprise Console. Within each phase cell, the application is deployed from Decision Center to Decision Server. The isolation of each phase allows for phase-specific customization and tests to be easily added to applications.

If the pre-production phase and production phases also have Operational Decision Manager topologies, this successive promotion of rules and events projects between Decision Center clusters continues. User access management may be simplified when the phases are isolated, and it may also be easier to test ODM updates (such as fixpacks) when there is no dependency on a shared Decision Center. However, this implementation is more costly in terms of the hardware resources required for multiple Decision Center clusters. Furthermore, the management, customization and synchronization of those isolated clusters may involve additional costs.

Alternatively, if the pre-production and production phases have Decision Server topologies, then the Decision Center cluster of the Staging phase cell is effectively shared between the Decision Server clusters of the final three phases. As an application progresses through these phases, it is deployed from Decision Center to the Decision Server in each phase cell. The main advantage of this implementation is that the management of RuleApp versions between phases is simplified through centralization.

The limitations of this implementation include increased complexity of user access management due to deploying applications in multiple cells, in particular the sensitive production phase cell. The Staging phase cell requires careful management and possibly extra resources for its shared Decision Center cluster so that Decision Center remains available and responsive. Also, since its Decision Center database becomes a single point of failure, care should be taken to ensure its availability (through replication or other means).

Deploying the Decision Manager topology in ODM V8

This section describes the steps to set up a Decision Manager golden topology by taking advantage of the profile augmentations and scripts included in WebSphere ODM V8.0.0.1. If you ignore the Business Space-specific tasks, the instructions are equally applicable to IBM ODM V8.0.1.

The main steps are:

  1. Create and augment profiles
  2. Create and configure the clusters
  3. Install and configure the IBM HTTP Server and the WebSphere Application Server web server plug-in.


Before you create the deployment topology make sure that the following software is installed in each node:

  • WebSphere Application Server Network Deployment V8.0.0.3 (with Business Space)
  • WebSphere eXtreme Scale V7.1.1.1 (if you are using the Event Server component)
  • WebSphere ODM Decision Center
  • WebSphere ODM Decision Server (with the Event Server component, if required)

Create and augment profiles

Most ODM users will create and augment profiles silently, using the scripts provided with the product. For this article, however, we've chosen to use the Profile Management Tool (PMT) because, for first-time users, it provides visibility into all the required steps and input parameters.
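For reference, the silent route can be sketched with manageprofiles.sh as shown below. The paths, profile names, and passwords are illustrative, and the ODM augmentation template directory varies by release, so check the templates shipped with your installation.

```shell
# Sketch (assumed paths/names): silent profile creation and augmentation.
WAS_HOME=/opt/IBM/WebSphere/AppServer
ODM_TEMPLATE=$WAS_HOME/profileTemplates/...   # the ODM template shipped with your release

# Create the deployment manager profile silently
$WAS_HOME/bin/manageprofiles.sh -create \
  -profileName Dmgr01 \
  -templatePath $WAS_HOME/profileTemplates/management \
  -serverType DEPLOYMENT_MANAGER \
  -enableAdminSecurity true -adminUserName admin -adminPassword adminpw

# Augment it with an ODM template (repeat per required template)
$WAS_HOME/bin/manageprofiles.sh -augment \
  -profileName Dmgr01 \
  -templatePath $ODM_TEMPLATE
```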

Create the deployment manager profile

Note: You may skip this section if you have experience creating deployment manager profiles.

For each cell, there is one deployment manager node and one deployment manager profile. You create the deployment manager profile as follows:

  1. Launch the Profile Management Tool (PMT) and select Create.
  2. Select WebSphere Application server => Management, as shown in Figure 6, then click Next.
    Figure 6. Select deployment environment
    Deployment environment dialog
  3. Select Deployment Manager as the server type, as shown in Figure 7.
    Figure 7. Select the server type
    Server type selection
  4. Click Next, then select the Typical profile creation option and click Next (unless you want to change default names and ports, in which case select Advanced profile creation).
  5. On the Administrative Security page, select Enable administrative security and provide your security credentials, as shown in Figure 8.
    Figure 8. Enable administrative security
    Administrative Security dialog
  6. Click Next. You'll be presented with a profile creation summary similar to the one shown in Figure 9. Click Create.
    Figure 9. Profile Creation summary
    Profile Creation summary page
  7. Once the profile is created, you can select First steps to verify the installation and start the deployment manager. You need to start the deployment manager before you federate other nodes in the cell into this deployment manager.

Create the custom node profiles

In our topology, the deployment manager is on a separate machine from the cluster member nodes, so we won't create a custom profile on the deployment machine. For every other node in the cell, we'll create a custom profile. A custom profile is used to federate a node to the deployment manager; it enables the deployment manager to create and manage application servers on that node via the node agent. To create the custom profiles, do the following:

  1. Create a custom node by launching the Profile Management Tool. Select Create, then select WebSphere Application Server => Custom profile, as shown in Figure 10.
    Figure 10. Create a custom profile
    Create custom profile selection
  2. Click Next and select Typical profile creation.
  3. On the next page, specify the parameters that are required to federate this node to the deployment manager: the deployment manager host name, SOAP port, and authentication credentials, as shown in Figure 11, then click Next. Make sure that the deployment manager is started. Note: it is generally good practice to federate a node only after profile augmentation, but in this case there are no issues with federating the node at this stage.
    Figure 11. Specify node federation parameters
    Node federation dialog
  4. On the next page, click Create.
  5. Repeat steps 1-4 for all the nodes that will be managed by the cell deployment manager.
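If you prefer the command line to the PMT federation step above, the same federation can be sketched with addNode.sh. The host name, port, and credentials are assumptions; 8879 is the default deployment manager SOAP port.

```shell
# Sketch (assumed host/port/credentials): federate a custom node into the cell.
WAS_HOME=/opt/IBM/WebSphere/AppServer

# Start the deployment manager first (on the deployment manager machine)
$WAS_HOME/profiles/Dmgr01/bin/startManager.sh

# Then, on each custom node machine:
$WAS_HOME/profiles/Custom01/bin/addNode.sh dmgrhost 8879 \
  -username admin -password adminpw
```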

Augment the deployment manager and custom profiles

The next step is to augment the deployment manager profile and all the custom profiles with the ODM profiles. If you're using Business Space, you'll also need to augment the deployment manager profile with Business Space.

The ODM golden topology requires that you augment the deployment manager profile with Decision Center, Decision Server and Decision Server Events. The custom profiles are only augmented with Decision Server Events, as shown in Table 1.

Table 1. Profile augmentation

ODM profile              Augments deployment manager   Augments custom profiles
Decision Center          x
Decision Server          x
Decision Server Events   x                             x
Business Space           x                             x

To augment the profiles, do the following:

  1. Stop the deployment manager.
  2. Augment the deployment manager profile with Decision Center, by selecting the appropriate augment option in the Augment Selection dialog, as shown in Figure 12, then click Next.
    Figure 12. Select an augment option
    Augment Selection dialog
  3. Specify the ODM installation location (in our case, /opt/IBM/WODM80).
  4. Click Next and, on the next page, click Augment.
  5. In a similar way, augment the deployment manager profile with Decision Server.
  6. Augment the deployment manager profile with Decision Server Events as follows:
    1. Select the augment and provide the ODM install location.
    2. On the Database Configuration page, shown in Figure 13, supply the required database parameters, and click Next. It is recommended that you test the database connection before you proceed to the next step.
      Figure 13. Specify database parameters
      Database Configuration dialog
    3. On the next page, select the messaging provider (we accepted the default), then click Augment. This also performs the eXtreme Scale augmentation. You should end up with four deployment manager profile augmentations, as shown in Figure 14.
      Figure 14. Augmented profiles
      Augmented profile list
  7. Augment the deployment manager profile with Business Space.
  8. Augment the custom nodes with Decision Server Events (this also performs the eXtreme Scale augmentation).
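Profile augmentation can likewise be driven from the command line with manageprofiles.sh and the -augment option. The template path below is deliberately a placeholder (check your ODM installation's profileTemplates directory for the actual template names); the sketch only assembles and prints the command:

```shell
# Assumed install roots; TEMPLATE_NAME is a placeholder, not a real
# ODM template -- look it up under $WODM_HOME/profileTemplates.
WAS_HOME=/opt/IBM/WebSphere/AppServer
WODM_HOME=/opt/IBM/WODM80

# Build a manageprofiles.sh -augment invocation for the dmgr profile.
aug_cmd="$WAS_HOME/bin/manageprofiles.sh -augment -profileName Dmgr01 \
-templatePath $WODM_HOME/profileTemplates/TEMPLATE_NAME"
echo "$aug_cmd"
```

Scripting the augmentation is mainly useful when you have many custom nodes to augment with Decision Server Events, as in step 8 above.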

Create the Decision Manager clusters

Next you need to create the Decision Manager clusters: Decision Center and Decision Server.

Create the Decision Center cluster

The quickest way to create and configure the Decision Center cluster is to run the configureDCCluster script, which is in the deployment manager's bin directory. In addition to creating the cluster, the script performs the following actions:

  • Starts the deployment manager server if it's not already started.
  • Installs the Decision Center application (teamserver) at the cluster level. Users are mapped to application groups when an application is deployed.
  • Configures the JDBC provider and the data source at the node level*.
  • Configures security.
  • Creates the rtsAdmin, rtsInstaller, rtsUser1, and rtsConfig users.
  • Configures users and groups.
  • Maps users and groups to roles.
  • Starts the cluster, servers, and applications.

* The script provided with this release only creates the definitions for one node, which is identified by the parameter -targetNodeName. You may run the script again for each additional node by changing the target node name, or you may configure additional nodes manually.

Complete the following steps to create the cluster using the script:

  1. Before running the script, you need to open the WAS_Installdir/profiles/Dmgr01/bin/rules/ file and provide the cluster configuration properties, as shown in the example below.
    # Cluster base configuration
    # Database configuration
    #	Supported database type:
    #		- DB2
    #		- Oracle
    #		- MSSQL
  2. Set WODM_HOME (for example, by issuing the command export WODM_HOME=/opt/IBM/WODM80).
  3. Go to the WAS_Installdir/profiles/Dmgr01/bin directory and run the configureDCCluster script, as shown in the example below:
    ./ -dmgrAdminUsername admin -dmgrAdminPassword admin \
    -clusterPropertiesFile ./rules/ \
    -dmgrHostName -dmgrPort 8879 \
    -targetNodeName ilogds02Node02

    When the script completes, you should see the following message: [wsadmin] GBRPC0028I: The cluster is up and running!

  4. Select WebSphere application server clusters => DecisionCenterCluster => Cluster members and select New to create new cluster members. In our case, we'll create two new members called dm.dc02 and dm.dc03, one on the ilogds02 node and the other on the ilogds03 node, as shown in Figure 15.
    Figure 15. Create new cluster members
    Create Additional Cluster Members dialog


  5. After creating the cluster members, make sure that the node agents are started, then start the decision center cluster.
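Because the script only creates node-level definitions for the node named by -targetNodeName, one approach is to wrap it in a loop and run it once per node. In the sketch below, the .sh suffix, the properties file name, and the deployment manager host name are placeholders, since the article does not spell them out; the commands are echoed rather than executed:

```shell
# WODM_HOME must be set before the script runs (see step 2).
export WODM_HOME=/opt/IBM/WODM80
# Placeholder name -- substitute the properties file from step 1.
PROPS=./rules/dc-cluster.properties

# One invocation per target node; dmgr.example.com is a placeholder.
for node in ilogds02Node02 ilogds03Node02; do
  cmd="./configureDCCluster.sh -dmgrAdminUsername admin -dmgrAdminPassword admin \
-clusterPropertiesFile $PROPS -dmgrHostName dmgr.example.com -dmgrPort 8879 \
-targetNodeName $node"
  echo "$cmd"
done
```

The alternative, as the earlier note says, is to configure the JDBC provider and data source on the additional nodes manually.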

Create the Decision Server cluster

To create the decision server cluster, do the following:

  1. Open the WAS_Installdir/profiles/Dmgr01/bin/rules/ file and provide the decision server cluster configuration properties, as shown in the example below. Note that we are creating an extra server called RulesMgrSrv for the Rule Execution Server console.
    # Cluster base configuration
    # Database configuration
    #	Supported database type:
    #		- DB2
    #		- Oracle
    #		- MSSQL
  2. Run the configureDSCluster script using a command similar to the following:
    ./ -dmgrAdminUsername admin -dmgrAdminPassword admin \
    -clusterPropertiesFile ./rules/ \
    -dmgrHostName -dmgrPort 8879 \
    -targetNodeName ilogds03Node02

    This script does the following:

    • Installs the JDBC provider, Execution Unit (XU) JCA connector, and data source at the node level.
    • Creates a server that is not part of any cluster and installs the Rule Execution Server console on this server.
    • Configures security.
    • Creates the resAdmin, resDeployer, resMonitor users and respective groups.
    • Deploys the HTDS and Scenario Service Provider (SSP) to the cluster member. Users are mapped to application groups when an application is deployed.
    • Starts the deployment manager server if it is not already started.
    • Starts the cluster, servers, and applications.
  3. After running the script, you may manually configure the XU adapter and the datasource in the additional nodes. Alternatively you may run the script for each different target node. The script recognizes the actions that have already been performed and only creates the datasource and XU definitions on each node. Should you make a mistake, you can uninstall all the cluster definitions by running the same command with the -uninstall parameter.
  4. Finally, in a manner similar to that used for the Decision Center cluster, create the Decision Server cluster members. At this point, the cell topology should resemble what is shown in Figure 16.
    Figure 16. Resulting local topology
    Tree showing resulting local topology
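As noted in step 3, running the same command with the -uninstall parameter rolls back all the cluster definitions the script created. A sketch of the rollback invocation, with the script suffix, properties file name, and host name as placeholders (the command is echoed, not executed):

```shell
# Placeholder values throughout -- substitute your own file and host.
un_cmd="./configureDSCluster.sh -dmgrAdminUsername admin -dmgrAdminPassword admin \
-clusterPropertiesFile ./rules/ds-cluster.properties \
-dmgrHostName dmgr.example.com -dmgrPort 8879 \
-targetNodeName ilogds03Node02 -uninstall"
echo "$un_cmd"
```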

This is effectively the Operational Decision Manager golden topology that was described earlier, but you still need to complete the configuration of Decision Server Events.

Configure Decision Server Events

The Decision Server Events components are installed and configured in the Decision Server cluster. For each cluster member (application server) in this cluster, do the following:

  1. Create a new wbe.home Java custom property (select Servers => Server Types => WebSphere application servers => server-name => Java and Process Management => Process Definition => Java Virtual Machine => Custom properties) that points to the ODM installation directory (for example, /opt/IBM/WODM80).
  2. Increase the JVM heapsize. At a minimum (assuming a 64-bit machine), set the initial heap size to 768 and the maximum heap size to 1024.
  3. Select WebSphere application server clusters => DecisionServerCluster => Cluster members => dm.ds02 => Container services => Startup beans service, then select Enable service at server startup.
  4. You now need to configure the decision server cluster as a member of the service integration bus. To do this, you first need to create a database and data source for the message store as follows:
    1. Create a database, for example DSEME.
    2. Create a cluster-level JDBC provider in the decision server cluster.
    3. Create a datasource (use a JNDI name like jdbc/eventsme).
  5. Now perform the following configuration steps for the decision server events bus:
    1. Select Service integration => Buses => WbeBus => Bus members and click Add. Select Cluster, then select DecisionServerCluster from the drop-down list.
    2. On the next page, select the messaging engine policy type. The following example uses High Availability, as shown in Figure 17, but Scalability with high availability is the recommended choice for scalable production environments.
      Figure 17. Select policy type
      Policy type selection
    3. On the next page, select Data store for message persistence.
    4. Click Next, then select the name of the messaging engine. Specify the JNDI name (such as jdbc/eventsme) and select an appropriate authentication alias. Make sure Create tables is selected and click Next.
    5. Change the heap sizes if required and then click Finish.
  6. The next step is to perform the required WbeBus and JMS configurations. For instructions, refer to the Configuring a silver topology cluster topic in the Information Center and perform steps 7 to 13 of that procedure. In addition, create the following topic space destinations on the WbeBus: Default.Topic.Space, WbeHistoryTopicSpace, and WbeTopicSpace.
  7. For production environments you should also consider configuring a minimum of three WebSphere eXtreme Scale catalog servers: one on the deployment manager node and the other two on two of the managed nodes (refer to [9]).
  8. Because connectors will be deployed to a cluster, you'll need to select Resource environment entries => WbeSrv01 => Custom properties and create the following entries:
    • with a value of cluster
    • with a value of DecisionServerCluster (which is the name of the cluster to which the connector applications will be deployed)

Install and configure the IBM HTTP Server and the web server plug-in for WebSphere Application Server

Our topology includes an IBM HTTP Server with the web server plug-in for WebSphere Application Server, which is used to route and load balance incoming HTTP requests. In production systems, one or more HTTP servers would be installed on separate machines. For simplicity, we'll use a single instance of the HTTP server, installed on the same physical server as the deployment manager. Because there is no managed node on the deployment manager machine, the plug-in is configured in the same manner as it would be if the HTTP server were installed on a remote machine. To set up this configuration, do the following:

  1. Install the IBM HTTP Server.
  2. Install the web server plug-in for WebSphere Application Server.
  3. Install the WebSphere Customization Toolbox.

Note: The above components are part of the WebSphere Application Server Supplements package and may be installed using IBM Installation Manager.

  1. Start the WebSphere Customization Toolbox and select Web Server Plug-ins Configuration Tool.
  2. Click Add to add the location of your installed plug-in.
  3. On the Web Server Plug-in Configurations tab, click Create.
  4. Select IBM HTTP Server V8 and click Next.
  5. Select 64-bit and click Next.
  6. Provide the web server configuration file location and the port (in our case, /opt/IBM/HTTPServer/conf/httpd.conf and 80, respectively).
  7. Click Next and set up an administration server for the web server (this is optional).
  8. Provide a unique web server definition name (we accepted the default of webserver1) and click Next.
  9. For the reasons explained earlier, select the remote configuration scenario and point to the deployment manager machine.
  10. Click Next and select the deployment manager profile.
  11. Click Next, then Configure and Finish.
  12. Copy the generated configuration script from the plug-in's bin directory to the application server's bin directory and run it with the arguments Dmgr01 admin admin (assuming the deployment manager profile name is Dmgr01 and the WebSphere Application Server user ID and password are both admin).
  13. From the administration console of the deployment manager, select System administration => Save Changes to Master Repository => Synchronize changes with Nodes and click Save.
  14. Select Servers => Server Types => Web servers. Select the web server, then click Generate Plug-in.
  15. Once the plug-in configuration file (plugin-cfg.xml) is successfully generated, click Propagate Plug-in to propagate the configuration file to the plug-in's configuration directory (in our case, /opt/IBM/WebSphere/Plugins/config/webserver1).

The final step is to ensure that all web modules are also mapped to the web server.

You should now be able to point your browser to http://<webserver host name>/teamserver to access the Decision Center Enterprise console. Likewise, use http://<webserver host name>/res for the Rule Execution Server management console.
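A quick way to smoke-test both consoles through the web server is to build the two URLs and probe them, for example with curl. HOST below is a placeholder for your IBM HTTP Server host name:

```shell
# Placeholder host name -- substitute your IBM HTTP Server host.
HOST=webserver.example.com

# Build the console URLs; the curl probe is left commented out so the
# sketch runs without a live server.
for path in teamserver res; do
  url="http://$HOST/$path"
  echo "$url"
  # curl -s -o /dev/null -w '%{http_code}\n' "$url"
done
```

On a live system, an HTTP 200 (or a redirect to a login page) from each URL confirms that the plug-in is routing requests to the clusters.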

We'll leave the configuration of Business Space as an exercise for users of ODM V8.0.0.1 who may need the ODM Business Space widgets. Note that as of ODM V8.0.1, Business Space is no longer required by ODM.

Applicability of the golden topologies

The recommendations and instructions for clustered deployments provided in this article apply only to ODM and WebSphere Application Server ND V8.x. For information about other versions of ODM and WebSphere Application Server, refer to their product Information Centers. Furthermore, this article does not cover any SupportPac artifacts; refer to the documentation supplied with each SupportPac for information relevant to clustered deployments.

The golden topologies presented in this article are suitable for scenarios in which the full range of rule and event execution, test, and simulation functionality offered by ODM is required. Not all of the functionality of the Decision Server cluster may be relevant to your particular requirements. For example, your projects may not involve any event processing, or the production phase of your environment may not require the SSP. We recommend customizing each topology to remove unneeded components and thus minimize resource usage, especially where performance and reliability are key factors.

There are some special limitations for Business Space and Decision Server Events with regard to clustered deployments and test artifacts, which are covered in the following section.

Limitations of Business Space and Decision Server Events

There is a limitation that each events runtime deployment can have only a single event project associated with it. We also recommend that only one event runtime deployment be performed within a cell. If you have several event projects, these will either need to be combined, or each event project will require a separate cell with the event runtime deployed.

Using the event tester enterprise application and Business Space widgets is not recommended in a clustered environment. Similarly, clustered deployment of the events runtime on a WebSphere Application Server for z/OS cluster is not supported; it must be deployed on a single server.

If these features are required, then a separate stand-alone server within the topologies could have these components and their dependencies deployed to it. For example, moving the event test enterprise application to a stand-alone server would require the event runtime to move, too, because they must be collocated and we recommend only one event application be deployed in a cell. If components are moved from the Decision Server cluster to a stand-alone server, some consideration must be given to which applications and resources can still make local calls, and the consequent performance penalties. The concerns for messaging are discussed in more detail in the following section.

Scalability, high-availability and performance

Several details of the implementation of the topologies presented in this article have been left to the choice of the reader. In the deployment instructions for the Decision Manager topology, the workload balancing strategy, as well as the design of the messaging infrastructure, are geared towards high availability. For further information on high availability and scalability strategies, and the trade-offs between resource isolation and the performance advantages of making local calls, refer to [8].

Performance monitoring

Monitoring the performance and reliability of your environment is extremely important in determining whether measures to increase horizontal or vertical scaling need to be taken. Some common bottlenecks include CPU, memory, disk access and other resource access (such as database or queue). Guidance for monitoring a WebSphere Application Server environment is outside the scope of this article, but can be performed using the WebSphere performance monitoring infrastructure and the IBM Tivoli Composite Application Manager family of products.


Conclusion

In this article you've learned about the Operational Decision Manager components and how they are installed and configured as WebSphere Application Server ND deployment topologies for high availability and horizontal scalability. You've also gained insight into the different phases of a decision management solution and the type of deployment environments that may be required to support each phase. Finally, you walked through the steps to install and configure an Operational Decision Manager golden topology.


Acknowledgements

Thanks to our colleague Karri Carlson-Neumann for her contribution to the discussions that led to many of the ideas underpinning the content of this article. Special thanks also to Peter Johnson for his views, reviews and continued support.



