Best practices for designing and implementing decision services, Part 2: Integrating IBM Business Process Manager and IBM Operational Decision Management

Part 1 of this series addressed some best practices for designing and implementing a decision service using IBM® Business Process Manager Advanced and WebSphere® Operational Decision Management. This article describes different out-of-the-box BPM and ODM integration capabilities and recommends an approach that provides better performance and flexibility for a long-term production solution. This content is part of the IBM Business Process Management Journal.

Jerome Boyer (boyerje@us.ibm.com), Senior Technical Staff Member, IBM

Jerome Boyer is an IBM expert on Enterprise Business Rule Management Systems in BPM, SOA and Complex Event Processing deployments. As an STSM, Jerome is the lead BRMS BPM solution architect in IBM Software Services for WebSphere (ISSW). Jerome is the author of "Agile Business Rule Development", published by Springer, 2011.



12 December 2012

Overview

Part 1 of this series emphasized that the decision service is part of the SOA service layer, decoupled from BPM, using a coarse-grained interface with a limited payload. The goals are reusability and avoiding having BPM consumers send all their data to the decision service. The information models differ between BPM and BRMS: BPM, as the workflow orchestration layer, does not carry the same structure and definitions as the services it consumes. The decision service is a reusable service not designed for a single consumer. Still, BPM is a consumer, and therefore integration between IBM Business Process Manager (IBM BPM) and IBM Operational Decision Manager (IBM ODM) is needed. This article describes the different product integration capabilities and provides some criteria on when to use them. Architects need to assess their functional and non-functional requirements to select the best solution.

Since June 2010, IBM BPM and IBM ODM (formerly ILOG JRules) have had out-of-the-box integration capabilities that demonstrate smooth integration. However, there are some constraints to consider before jumping too quickly on these integrations. The most important constraint is linked to the information models needed by the two products. As seen in Part 1, the business process manages variables to pass information between process activities and to present or gather data to and from humans using coaches. Most of the time, the decision service needs a different data model, adapted from the domain model using an object-oriented analysis approach, adding utility methods and algorithm operations, resulting in a more complex data graph than the one used by BPM. The goals are better rule execution and an improved rule authoring experience. An IBM BPM process application uses JavaScript variables to carry data between process activities, a BPM Advanced Integration Service (AIS) uses a service data object to manage data, and ODM consumes XML natively or, even better, POJO classes. Both data mapping and technology mapping are needed.
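For example, the rule-side model often adds helper operations so that rule conditions read naturally. Here is a minimal sketch of that idea; the Claim class and its fields are illustrative, not taken from the actual sample application:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Hypothetical rule-side domain class enriched with utility methods so a
// rule can say "the claim is filed late" instead of computing dates inline.
class Claim {
    LocalDate dateOfLoss;
    LocalDate filingDate;

    Claim(LocalDate dateOfLoss, LocalDate filingDate) {
        this.dateOfLoss = dateOfLoss;
        this.filingDate = filingDate;
    }

    // Helper exposed to the BOM verbalization: days between loss and filing.
    long daysBetweenLossAndFiling() {
        return ChronoUnit.DAYS.between(dateOfLoss, filingDate);
    }

    boolean isFiledLate(int maxDays) {
        return daysBetweenLossAndFiling() > maxDays;
    }
}
```

Methods like these exist only in the rule-side model; the BPM process variables carrying the same claim have no need for them, which is one reason the two models diverge.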

During the last three years we've observed stronger requirements to externalize rule processing from processes, with the two most common demands being data validation and rich decision activities. Data validation rules are often buried inside screen logic, but are also needed to validate all data gathered in the process before, for example, saving it to a backend system. Rich decision activities include every human task in which knowledge and subject matter experts make decisions on data. It's common to consider process flow routing rules (gateway nodes) as the ones to externalize into BRMS, but in fact this is a very rare use case. Most of those routing rules are simple, represent a limited number of rules, and are very static by nature. Implementing such routing logic using JavaScript in IBM BPM is far simpler and quicker. For data validation, the requirements can be decomposed into the following two:

  • Validate data as soon as it is entered in forms so that error messages appear in the same coach.
  • Validate data at the end of all data gathering activities, once the user submits the business data to the backend or to the downstream process. A list of errors is reported back to the users so they can fix the data.
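The second requirement implies that the decision service reports the full set of problems rather than failing on the first one. A minimal sketch of such a result object follows; the class and method names are illustrative, not part of the actual claim application:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical result object returned by the validation decision service.
// Rules append issues instead of stopping at the first error, so the user
// gets the complete list back and can fix everything in one pass.
class ValidationResult {
    private final List<String> issues = new ArrayList<>();

    void addIssue(String message) {
        issues.add(message);
    }

    List<String> getIssues() {
        return issues;
    }

    boolean hasErrors() {
        return !issues.isEmpty();
    }
}
```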

Figure 1 shows an example of a business process definition that illustrates a data entry activity followed by a formal data validation step, modeled as a nested service.

Figure 1. Human service with data validation call
Human service with data validation call

For data validation in a coach, the current common approach is to use JavaScript inside the coach to implement structural rules, or business rules that constrain the data model or the user interface. The following examples could be implemented by adding logic to widget controls in the coach.

  • If the type of loss is a car accident, then one of the insured cars needs to be selected and the accident location address is mandatory.
  • If the type of loss is fire, then at least one insured property needs to be selected.

Within the Coach Designer, a developer can control element visibility using conditions on other fields. Figure 2 shows a condition on the claim type being accident that is used to present a field.

Figure 2. Control visibility using other field value
Control visibility using other field value

Looking at the rules above, it's obvious that they also need to be part of a final validation step in the process. An external implementation of the data validation rules can be reused by other parts of the application or by other consumers. Therefore, such constraint rules may be coded in two places: the user interface and the BRMS. In fact, there are solutions where the BRMS may support both cases, which we'll cover in Part 3 of this series. Also, because these are executable rules, it's important to look not just at the conditions of the rules but also at the action part and the context of execution. The action in the UI is to display an error message at the field level, whereas the validate data decision service rejects the claim and accumulates all the issues found in a list.

The examples presented in this article leverage IBM BPM V8.0.1 Advanced and IBM ODM V8.0.1, but most of the approaches presented will work with the 7.5 versions of both products.

When designing a business process model definition in Process Designer, a process developer may use different integration capabilities to interact with outbound services such as decision services. The current capabilities are web service integration, Java integration, Advanced Integration Services, and BPM JRules decision services.


Basic claim processing scenario

To illustrate the different examples presented in this article, we'll use a basic claim processing application for car insurance. The process starts with a set of coaches to enter claim data, then a first system lane activity validates the claim and reports any errors to the claim processor (process participant), and the process progresses to eligibility and adjudication steps that are also decision services. The happy path continues to claim payment and filing the claim. We need to consider at least two decision services: one for data validation and one to adjudicate the claim. The rule harvesting leads us to identify around 100 rules in the validation step and at least 500 in the adjudication step. The development of the business process definition and the decision services is done in parallel and most of the time involves different developer skill sets: business process analysts define the BPMN process definition, while rule analysts, who are often knowledge engineers well versed in knowledge acquisition techniques and knowledge representation, work on the rule discovery, analysis, and implementation. Also, on the business side, the process owner often differs from the business rules owners; for example, the claim processing process owner does not own the adjudication rules, the adjudication department does, and the same goes for the risk assessment rules.

The parallel development of the two teams needs to be synchronized. The information model definition and the decision service specifications represent the two main elements constraining the integration between the products.


The JRules decision service

IBM Process Designer includes a special service, in the Decision item of the process application library, to call a hosted transparent decision service (HTDS). HTDS is a predefined web application that exposes rulesets with WSDL, and offers a set of services like decision service metadata access and runtime statistics using the JMX MBean protocol. The URL matches the RuleApp and ruleset path: DecisionService/ws/ClaimProcessingRuleApp/ValidateClaimRules?wsdl. HTDS uses a local eXecution Unit (XU), the container of rule engines, and is deployed on its own server. The server's physical resources can be adapted according to the performance goals and scalability needs. HTDS supports only remote calls using the SOAP over HTTP protocol. Up to JRules V7.1.1, HTDS supported only XSD and a basic Java™ model, but starting with Decision Server V7.5, it is possible to package the Java project supporting the domain model into an archive file (JAR) and deploy it along with the RuleApp, so HTDS can support a rich Java model. Java model processing has better performance than XML and gives developers more flexibility for rule authoring.
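Because the WSDL location follows mechanically from the RuleApp and ruleset names, a client can compute it rather than hard-code it. A small sketch follows; the host, port, and the DecisionService context reflect a default HTDS deployment and are assumptions here:

```java
// Builds the HTDS WSDL URL following the
// DecisionService/ws/<ruleApp>/<ruleset>?wsdl pattern described above.
// Host and port are illustrative defaults, not mandated values.
class HtdsUrl {
    static String wsdl(String host, int port, String ruleApp, String ruleset) {
        return "http://" + host + ":" + port
                + "/DecisionService/ws/" + ruleApp + "/" + ruleset + "?wsdl";
    }
}
```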

Figure 3. HTDS using XU is consumable by different applications, such as a BPM process app
HTDS using XU is consumable by different applications, such as a BPM process app


Prior to IBM WebSphere ODM V7.5, rulesets got their Java XOM definition from the client application (decision service implementation) class loader. With ODM V7.5 and beyond, the Java XOM is stored in the Rule Execution Server persistence layer and is a manageable artifact in the same way as a RuleApp. From Rule Designer, you can select the rule project and deploy the XOM to a Rule Execution Server instance. This is a very important enhancement, but it does not remove the problem of exposing a ruleset signature to the SOA service. Architects need to spend some time deciding whether this service has to be exposed as-is or through a service integration component. The granularity of the service design is one of the major considerations for good SOA adoption. It is very common to offer multiple different operations for the same ruleset. Those operations are defined as part of the service specification of the decision service. HTDS does not support this architecture.

When using HTDS, it's important to adapt the settings to control the generated WSDL, especially the namespaces of the different XSDs. You can set those options in the RES console, as shown in Figure 4.

Figure 4. Ensure coherence between namespaces
Ensure coherence between namespaces

When a BPM process app is the consumer, the first thing to do is to configure the RES server URL in Process Designer by adding a server definition in the Process App Settings, by selecting ILOG Rules Server as the type and specifying the RES console server URL in the default field (without the RES web context name), as shown in Figure 5.

Figure 5. Set the RES server URL
Set the RES server URL

The next step is to use the Decision item in the process app library to add a decision service client (for example, ValidateClaimDS). In the canvas, drag and drop a JRules decision service and connect it to the start and end nodes, as shown in Figure 6. In the Implementation tab, specify the server configuration you defined previously, and click Connect. When the connection is successful, all the RuleApps defined in the RESDB are exposed in a single-select list. Select the RuleApp and then the ruleset that the decision service client will call.

Figure 6. Defining a decision service client in Process Designer
Defining a decision service client in Process Designer

Note: This feature works only when RES is deployed on WebSphere Application Server.

The next step is to get the definition of the expected Rule Business Object model (refer to Part 1 for more information about the different models). The type definition may conflict with existing business objects defined in the process app or within linked toolkits. In fact, it's common to have business object definitions in a process application focusing on supporting coach implementation and carrying data between process activities, which are different from business objects defined by importing a WSDL. A business process developer adapts this model in parallel to a rule developer working on his or her Rule Business Object (RBO) model. It's challenging to use a unique model, but possible. In most implementations, mapping is needed. The input and output parameters of the new service in Process Designer use the process variables as defined in the process application and not the ones imported because this local service has to be consumed by human services or a BPD. Figure 7 illustrates that the private variables are using the types of the service imported, whereas public variables are using the type of the process application. Private variables are used as parameters of HTDS.

Figure 7. External parameter using business object, internal private using imported model
External parameter using business object, internal private using imported model


It's interesting to note that the parameters of the JRules decision service include a string to log the decision id, and use wrappers on top of the ruleset parameters. The decision id is used to correlate the process instance with the decision id used in the ODM Decision Warehouse, so for a given claim a business user can get the executed rule names. This capability helps answer the question: which rules were executed for this process instance? I prefer using the claimNumber or policyNumber as decision ids, as those identifiers are business keys. When a call center representative receives a call from the claimant, he or she uses the claim number to get information about the claim, and could also get the rules executed during the adjudication step as they are loaded from the Decision Warehouse and the Decision Center.
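Using a business key as the decision id makes that correlation trivial: no extra lookup table is needed between process instances and warehouse traces. A minimal sketch of the idea, with purely illustrative class names:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrates correlating process data with Decision Warehouse traces when
// the business key (the claim number) is used as the decision id: the
// executed rule names can be looked up directly from the key the call
// center representative already has.
class DecisionTraceIndex {
    private final Map<String, String[]> tracesByDecisionId = new HashMap<>();

    void record(String decisionId, String[] executedRuleNames) {
        tracesByDecisionId.put(decisionId, executedRuleNames);
    }

    String[] rulesExecutedFor(String claimNumber) {
        // claimNumber IS the decision id, so this is a direct lookup.
        return tracesByDecisionId.get(claimNumber);
    }
}
```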

The data model mapping is done inside the implementation of the system lane service by using server side script, so the diagram looks like Figure 8.

Figure 8. Mappings in the decision service client implementation
Mappings in the decision service client implementation

Each server script does the mapping in JavaScript. This code is cumbersome and may become a real challenge, because preparing data for rule processing may involve loading from external data sources. As a concrete example, the coach may support entering the policy number, so the ClaimBO has a policyNumber attribute, which is a string, whereas the ClaimRBO in the rule processing has an InsurancePolicy reference, which includes coverage, deductibles, insured properties, and so on. Therefore, the step to prepare the RBO does not just map between two complex types but also loads data from the system of record or from a data access layer. In fact, this should be the job of an integration layer. IBM BPM Advanced supports efficient integration logic using BPEL or mediation flows.
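The shape of that preparation step can be sketched as follows. All class names here are illustrative stand-ins for the models discussed above, and the DAO is a hypothetical interface over the system of record:

```java
// Process-side business object: carries only the policy number string.
class ClaimBO {
    String claimNumber;
    String policyNumber;
}

// Rule-side objects: the claim references a full policy graph.
class InsurancePolicy {
    String policyNumber;
    double deductible;

    InsurancePolicy(String policyNumber, double deductible) {
        this.policyNumber = policyNumber;
        this.deductible = deductible;
    }
}

class ClaimRBO {
    String claimNumber;
    InsurancePolicy policy;
}

// Stand-in for the data access layer reading the system of record.
interface PolicyDao {
    InsurancePolicy loadInsurancePolicy(String policyNumber);
}

class ClaimMapper {
    // Not a field-by-field copy: the mapping enriches the claim with data
    // loaded from the backend before rule execution.
    static ClaimRBO toRBO(ClaimBO bo, PolicyDao dao) {
        ClaimRBO rbo = new ClaimRBO();
        rbo.claimNumber = bo.claimNumber;
        rbo.policy = dao.loadInsurancePolicy(bo.policyNumber);
        return rbo;
    }
}
```

Implementing this in an integration layer, rather than in coach-side JavaScript, keeps the data loading strategy in one testable place.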

You'll want to use this integration capability for a quick demonstration and proof of concept, or when the BPM process app and system lane activity are able to send all the data to the rule processing. Architects and service designers prefer to control their business service interface definitions, and to implement their data access.


Single model development

One of the problems with external services is doing information mapping in IBM BPM. In fact, it is possible to limit the number of data mappings by defining the information model upfront in the early project iterations. Not every type can be defined, but a lot of types are well known: most BPM and ODM deployments re-engineer existing business applications, so the data model can be partially derived from them. Types like Address, Customer, Person, LegalEntity, and Coverage are well defined. Types like Claim, InsurancePolicy, and so on may not be fully defined when developing the process application; therefore their definitions may change over the project implementation. The well-defined types can be described in existing XSDs quickly imported from a BPM toolkit. From the XSDs, you can develop WSDLs that can be imported into Process Designer to generate the business objects as part of a toolkit. The WSDL does not need to be online; a file URL works just as well (for example, //localhost:/C:/workspaces/abrd-claim-ws/claim-model/src/wsdl/ClaimManagerModule.wsdl).

As seen in Part 1, selecting UML => XSD => java bean is a very efficient approach to designing and generating code for the information model. The development process may look like that shown in Figure 9.

Figure 9. Tools and elements to define business objects
Tools and elements to define business objects

From the XSD, using the XJC JAXB tool you can generate annotated Java classes and load them in Rule Designer to define the BOM and rule vocabulary. This is one approach, and should not be considered the only path. Having a pure Java class is recommended when classes need business logic. The decision service interface is defined as a Java interface using JAXWS annotation, and WSDLs are generated. The WSDL can be imported into Process Designer to create the data definition (business objects) inside a reusable toolkit. This approach can be used as early as possible before doing a lot of coach development. When some business objects are not fully defined or have very different semantics between the service producer and the BPM workflow, then each developer can develop their own model, and the BPM developer needs to implement mapping between the business objects and the Service Exposition Model.


Java implementation approach

The most flexible approach is to implement the decision service as a Java module, and it is no more complex than the other approaches. As described in Implementing decision services in Part 1, the approach is to design a module responsible for managing the main business entity (for example, the claim managed by ClaimModule). The business interface defines the decision service operations (for example, validateClaim and adjudicateClaim), as shown in the following listing.

@WebService(name = "ClaimManagerService", targetNamespace = "http://abrd.org/claim/serv")
@SOAPBinding(style = SOAPBinding.Style.DOCUMENT, use = SOAPBinding.Use.LITERAL,
		parameterStyle = SOAPBinding.ParameterStyle.WRAPPED)
public interface ClaimManager {
	public Claim getClaimByNumber(String claimNumber);

	public void updateClaim(Claim claim);

	public void saveClaim(Claim claim);

	// -- Decision service operations
	public ValidationResult validateClaim(Claim claim);
	public ValidationResult validateClaimByNumber(String claimNumber);

	public ValidationResult adjudicateClaim(Claim claim);
	public ValidationResult adjudicateClaimByNumber(String claimNumber);
}
The implementation uses Java and the RES API (see Part 1 for the code sample). As the interface definition above shows, it's possible to offer different parameters for each operation to support different consumer models: from the ones able to send all the data to the ones sending just a primary key. The implementation takes care of loading the data if needed, including managing caches, reference data, and so on.

@WebService(serviceName = "ClaimManagerService",
		portName = "ClaimManagerPort",
		endpointInterface = "abrd.claim.services.ClaimManager",
		targetNamespace = "http://abrd.org/claim/serv")
public class ClaimManagerImpl implements ClaimManager {
	// ...
	public ValidationResult adjudicateClaim(Claim claim) {
		return claimProcessingDS.adjudicateClaim(claim);
	}
	// ...
}
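The by-number variants can simply load the claim and delegate to the full-payload operation, which keeps one ruleset behind several consumer-friendly signatures. A minimal sketch of this delegation, with illustrative class names and a stubbed rule invocation standing in for the RES session call:

```java
// All names here are illustrative. The loader stands in for the service's
// own data access layer, so thin consumers only send the business key.
class ClaimData {
    String claimNumber;

    ClaimData(String claimNumber) {
        this.claimNumber = claimNumber;
    }
}

class ClaimValidationResult {
    boolean valid;

    ClaimValidationResult(boolean valid) {
        this.valid = valid;
    }
}

interface ClaimLoader {
    ClaimData getClaimByNumber(String claimNumber);
}

class ClaimManagerSketch {
    private final ClaimLoader loader;

    ClaimManagerSketch(ClaimLoader loader) {
        this.loader = loader;
    }

    ClaimValidationResult validateClaim(ClaimData claim) {
        // A real implementation would invoke the ruleset through the
        // RES session API here (see Part 1 for the code sample).
        return new ClaimValidationResult(claim.claimNumber != null);
    }

    ClaimValidationResult validateClaimByNumber(String claimNumber) {
        // Thin-consumer path: load the data, then reuse the same operation.
        return validateClaim(loader.getClaimByNumber(claimNumber));
    }
}
```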

The module is packaged as a very simple WAR with only one dependency on jrules-res-session-java.jar and the rule business object model JAR (under WEB-INF/lib). The Ant target is as follows:

<war destfile="${build.dir}/${war.name}" needxmlfile="false">
	<webinf dir="${src.webcontent}/WEB-INF" includes="*.xml" />		
	<classes dir="${build.classes}" /> 		
	<lib dir="${src.webcontent}/WEB-INF/lib">
		<include name="*.jar" />
	</lib>
	<fileset dir="${src.webcontent}">
		<include name="*.html"/>
	</fileset>
</war>

The web.xml references the service implementation Java class and the servlet name maps the service name as defined in the JAX-WS annotation. It's important to add the resource-ref element to declare the JNDI name for the XU connection factory, needed for rule execution, and the RESDB datasource, needed to load the ruleset.

<display-name>Claim Manager Module</display-name>
<servlet>
	<servlet-name>ClaimManagerService</servlet-name> 
	<servlet-class>abrd.claim.services.ClaimManagerImpl</servlet-class> 
</servlet>
<servlet-mapping>
	<servlet-name>ClaimManagerService</servlet-name> 
	<url-pattern>/serv</url-pattern> 
</servlet-mapping>
<welcome-file-list>
	<welcome-file>index.html</welcome-file>
</welcome-file-list>
<resource-ref>
	<res-ref-name>jdbc/resdatasource</res-ref-name>
	<res-type>javax.sql.DataSource</res-type>
	<res-auth>Container</res-auth>
	<res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>
<resource-ref>
	<res-ref-name>eis/XUConnectionFactory</res-ref-name>
	<res-type>javax.resource.cci.ConnectionFactory</res-type>
	<res-auth>Application</res-auth>
	<res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>

The ra.xml is the resource adapter descriptor, and needs to point to the RESDB datasource. This file is in the WAR under WEB-INF.

<config-property>
	<config-property-name>persistenceType</config-property-name>
	<config-property-type>java.lang.String</config-property-type>
	<config-property-value>datasource</config-property-value>
</config-property>
<config-property>
	<config-property-name>persistenceProperties</config-property-name>
	<config-property-type>java.lang.String</config-property-type>
	<config-property-value>JNDI_NAME=jdbc/resdatasource</config-property-value>
</config-property>

Finally when deploying the WAR on WebSphere Application Server, make sure to deploy it as a web service, and link the resource reference as defined in the web.xml to the JNDI name configured in WebSphere Application Server, as shown in Figure 10.

Figure 10. Linking dependent resources
Linking dependent resources

On the BPM side, the approach is to use the web service integration. The WSDL URL has the following structure:
<hostname>:<port>/<webappcontext>/<servlet_url_mapping>/<servicename>.wsdl
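Given the servlet mapping /serv defined in the web.xml above, the URL can be assembled from its parts. A small sketch; the claim-manager context name is an assumption, not a value from the sample:

```java
// Assembles the JAX-WS WSDL URL following the
// <hostname>:<port>/<webappcontext>/<servlet_url_mapping>/<servicename>.wsdl
// structure stated above. Context and service names are illustrative.
class JaxwsWsdlUrl {
    static String of(String host, int port, String context,
                     String servletMapping, String serviceName) {
        return host + ":" + port + "/" + context + "/"
                + servletMapping + "/" + serviceName + ".wsdl";
    }
}
```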

The mapping between the models is still needed, for the same reasons explained previously. As Figure 11 shows, you can use a simple operation giving just the claim number. The process has saved the claim to the system of record, getting the unique id, and the adjudication or validation decision service operations load the data from this backend.

Figure 11. Web service client configuration
Web service client configuration

The two integrations presented so far use the Rule Execution Server deployed on its own remote server, as illustrated in Figure 3. This approach is the most flexible and enables developers to implement a rich set of decision service operations, increasing reusability. Data caching and data loading can be fine-tuned with the Java implementation.


SCA Integration: The Advanced Integration Service approach

An AIS is another option to enable integration between BPM and ODM. In Process Designer, the developer defines an advanced integration service using the library element Implementation > Advanced Integration Service, as shown in Figure 12.

Figure 12. AIS in Process Designer
AIS in Process Designer

The implementation is done in IBM Integration Designer, and can use a BPEL, mediation flow, or Java implementation. With this integration type, you would deploy the ODM rule execution unit (XU) on the process server in the application target cluster, as shown in Figure 13. Any decision service implementation using BPEL, mediation flow, JAX-WS, or HTDS is also co-deployed on the AppTarget cluster. The RES console is deployed within the support cluster.

Figure 13. Co-located XU
Co-located XU

When using IBM BPM, components like the SCA runtime and BPEL engine are added to the AppTarget cluster (in a four-cluster topology), or are deployed within a specific fifth cluster used for service modules. In the latter case, the ODM XU and HTDS are deployed within this service cluster. The main principle is to co-locate the rule engine pool in the same JVM as the decision service. This decision service is, in this case, an SCA component. More precisely, the service SCA module is packaged as an EAR and the decision service is a JAR file within it. As a pure Java component, it's very easy to reuse a Java decision service in other components outside of SCA.

When the XOM is Java based, the integration developer needs to map the Service Data Object, used in SCA, to the Java beans. The Java implementation code of the decision service operation has an SDO parameter and SDO return type, as shown below.

public DataObject adjudicateClaim(DataObject claimSdo) {
	// map the SDO to the rule business object
	Claim aClaim = convertSDOClaim(claimSdo);
	// RES session API code ...
	AdjudicationResult result = (AdjudicationResult) outputParameters.get("result");
	DataObject response = createSDOResponse("http://model.isis.ibm.com", result);
	return response;
}

You can implement your own mapping using the BOFactory API to map from a Java bean to SDO. Good knowledge of the DataObject API is necessary. Below is an example of code to map an SDO claim data object to a claim RBO and to create a resulting SDO object using BOFactory.

Claim claim = new Claim();
claim.setClaimNumber(claimBO.getString("claimNumber"));
claim.setDateOfLoss(claimBO.getDate("dateOfLoss"));
// use DAO to access the insurance policy
InsurancePolicy policy = dao.loadInsurancePolicy(claimBO.getString("policyNumber"));
claim.setPolicy(policy);
// build response from ValidationResult RBO
com.ibm.websphere.bo.BOFactory boFactory = (com.ibm.websphere.bo.BOFactory) 
ServiceManager.INSTANCE.locateService("com/ibm/websphere/bo/BOFactory");
DataObject validationResult = boFactory.create("http://abrd.org/claim/serv",
"validationResult");
validationResult.setBoolean("error", result.getError());
// ...

The Java implementation of the rule service uses the POJO RES session to interact with the rule engine. This implementation enforces using a local call to process the rules, which is by far the most efficient method. When the XOM is based on XSD, a better implementation is to use BPEL.

Figure 14 illustrates a packaging containing a decision service Java implementation, an SCA module for the claim management services, a Data Access Object (DAO) to locally access the different claim database tables, and the RES XU co-located on the same node as the SCA runtime and process server (BPEL).

Figure 14. Packaging decision service java implementation as SCA module
Packaging decision service java implementation as SCA module

Part 3 of this series will cover a way to use SDO as an XOM for ODM so no mapping is needed.


Java integration approach

One of the interesting capabilities of IBM BPM is the ability to define your own Java integration. The approach is to define a Java class that supports the Java implementation of the decision service operations like validate claim and adjudicate claim, as shown below.

public class ClaimDecisionService {
	protected RuleProcessingImpl rp;

	public ClaimDecisionService() {
		rp = new RuleProcessingImpl();
	}

	public TWObject validateClaim(TWObject claimBO) { // ...

	public TWObject adjudicateClaim(TWObject claimBO) { // ...

The interesting part is the parameters. Method parameters and return types can only be basic Java wrapper classes (the ones in the java.lang package), jdom.Document or jdom.Element, or BPM types such as TWObject and TWList. TWObject represents complex data types as defined in the process application or toolkit. When the process variable is a list, it is mapped as teamworks.TWList. In this method, the return type is declared in Process Designer as ValidationResult and the input parameter as the Claim complex type.

Figure 15. Parameters as complex type
Parameters as complex type

When the service is using Java XOM, the implementation class needs to do the mapping between TWObject and Java beans. The code uses the TWObjectFactory and TWObject API as shown in the code below.

TWObject result = null;
TWList issues = null;
try {
	result = TWObjectFactory.createObject();
	issues = TWObjectFactory.createList();
	result.setPropertyValue("issues", issues);
	// do mapping
	Claim claim = new Claim();
	claim.setClaimNumber((String) claimBO.getPropertyValue("claimNumber"));
	claim.setType(ClaimType.fromValue((String) claimBO.getPropertyValue("type")));
	claim.setDateOfLoss((Date) claimBO.getPropertyValue("dateOfLoss"));
	// ...
	ValidationResult resultOut = rp.validateClaim(claim);
	// map result and claim back to TWObject
	for (Issue i : resultOut.getIssues()) {
		issues.addArrayData(i.getMessage());
	}
	// ...
} catch (Exception e) {
	// report the mapping or rule execution failure to the caller
	throw new RuntimeException(e);
}

The TWObject and TWObjectFactory definitions are in the pscInt.jar in the folder <was.home>/BPM/lombardi/lib.

Once the code is completed, you need to package your code and any dependent classes within a JAR, then upload it to your process application or toolkit by selecting Library => File => Server File. Once uploaded, you need to create an Integration Service and use the Java Integration from the palette, as shown in Figure 16. In the definition property, you can specify the class from the JAR you uploaded, and select the method the service needs to use.

Figure 16. Java integration using a Java decision service and a RES API
Java integration using a Java decision service and a RES API


Checking Translate JavaBeans is helpful when the Java method returns a complex class that conforms to the Java Bean specification. This flag causes the integration layer to marshal the returned data graph as an XML document, which can be mapped to XMLElement process variables.

As the previous code listing illustrates, the method delegates to the rule processing class, which itself uses the RES API (see Part 1 for the code sample). The rulesets are deployed in the RESDB database and the RES session accesses the XU deployed in the Process Server. The topology is the same as that shown in Figure 13.

This approach requires coding all the mapping from TWObject and TWList to Java beans and back. The code is error-prone and cumbersome. Another approach is to use a JSON or XML document passed as a string or XMLElement parameter and return type, then unmarshal it into a Java bean, using JAXB, for example. Another solution is to use TWObject as an XOM class. This will be covered in Part 3 of this series.
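The document-passing alternative concentrates the mapping in a single unmarshalling point. A JDK-only sketch of the idea follows; the XML shape and the XmlClaim fields are illustrative, and a real implementation would more likely generate the bean with JAXB from the XSD:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Illustrative bean for the claim payload crossing the boundary as XML.
class XmlClaim {
    String claimNumber;
    String type;
}

class ClaimXmlReader {
    // Unmarshals a small XML payload into the bean in one place, instead
    // of field-by-field TWObject mapping scattered through the service.
    static XmlClaim parse(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            XmlClaim claim = new XmlClaim();
            claim.claimNumber = doc.getElementsByTagName("claimNumber").item(0).getTextContent();
            claim.type = doc.getElementsByTagName("type").item(0).getTextContent();
            return claim;
        } catch (Exception e) {
            throw new IllegalArgumentException("Invalid claim XML", e);
        }
    }
}
```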

The co-deployment of the Rule Execution Server within Process Server avoids serializing data for remote calls, improving performance. But the physical resources defined to support Process Server may be overkill for rule processing, impacting the deployment cost of the solution.


Conclusion

The recommended approach to integrating IBM BPM and ODM, which provides better performance, more control, and flexibility, is to use a decision service with a Java implementation exposed with JAX-WS web service technology and a Java XOM. With this approach, you can control the service exposition model to enable coarse-grained operations, and let the service be responsible for its own data access strategy. Multiple operations within the decision service can be implemented on top of the same ruleset. This approach is well suited to a service-oriented architecture (SOA), simple to develop, and can be tested in isolation. The rule processing server is centralized, and virtualization can be used for cloud deployment. The same Java code can be exposed with RESTful services, so the choice of protocol will not impact the implementation.

IBM BPM has a rich palette of integration capabilities, and the choice of one integration approach over another will depend on the service design, the implementation approach, and the requirements. In IBM BPM, web service consumption is the most flexible and independent of the deployment model of the server. Because all these capabilities are very easy to use, I recommend prototyping each integration capability to assess the one that best addresses your needs: one size does not fit all.

Again, the most sensitive subject is the information model: how to reuse the more stable entities versus defining your own for your specific concerns.

In the final part of this series, I'll cover RESTful decision services, Service Data Objects (SDO), TWObject, and mediation flow.
