Introduction to SCA runtime
As described in the first article of this series, the Service Component Architecture (SCA) specification is one of the main building blocks in the IBM® Business Process Management (BPM) stack. In other words, SCA is the implementation specification for SOA as an architectural style. Part 1 of this series introduced the operational reference architecture (see Figure 1).
Figure 1: Operational reference architecture for WebSphere Process Server
This article provides an overview of the concrete runtime implementation and related artifacts, such as modules and components in the context of WebSphere® Process Server (hereafter called Process Server). In this respect, the focus is on how things are implemented and which components are important from an operational perspective. As Process Server relies on WebSphere Application Server (hereafter called Application Server) technology and functions, you can expect to see many J2EE components, some of which might already be familiar to you.
The SCA specification introduces a new type of concept called module to
create service-oriented business applications. An SCA module characterizes
itself as an application that exposes services to the outside world and
furthermore implements a certain kind of business logic. From an
integration developer's perspective, a business application can consist of
a set of different modules that interact with each other on a
business-focused level. For example, the module called
InvoiceHandling uses services exposed by the
CustomerManagement module. SCA intentionally provides
this kind of business-focused view; the concrete technical implementation,
which answers the question of how the actual service call to the
CustomerManagement module is implemented, is hidden in
many cases. However, this makes it even more important for administrators
and architects to collaborate to gain an understanding of the operational
aspects of SCA modules for a successful implementation within a Process Server environment.
From a more technical point of view, SCA modules are based on J2EE technologies and make use of different kinds of concepts that people have been using for years: Enterprise Java™ Beans (EJB), Message Driven Beans (MDB), Java 2 Connector Architecture (J2C) and so on. In terms of J2EE, the structure of an SCA module can be described as follows (see Figure 2).
Figure 2: Operational view on SCA modules
One SCA module results in one Enterprise Archive (EAR) file for deployment that contains a set of J2EE artifacts:
- 0..1 Web Archive (WAR): created only when Web services or REST services are exposed by a module.
- 1..1 EJB module with a number of Beans:
- 1..1 Stateless Session Bean (SSB) for synchronous SCA communication.
- 1..1 MDB responsible for asynchronous SCA communication.
- 0..n Depending on the contained SCA Components: a certain number of EJBs and/or MDBs that build a facade around the Components' implementations.
- 1..n Utility JARs
- The SCA module content itself (besides the core Service Component Definition Language (SCDL) files, this may also contain configuration files for certain SCA Component implementations).
- One JAR for each Library (contains the developer's Interfaces and Business Objects, that is, XSD and WSDL documents).
- A number of JAR files that provide additional functionality (for instance Log4J or other custom developed code).
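The packaging cardinalities above can be sketched in a few lines of plain Python. This is only an illustration of the 0..1/1..1/1..n rules, not a real deployment tool; the module names, feature flags, and artifact naming scheme are hypothetical.

```python
# Sketch: derive the J2EE artifacts packaged into an SCA module's EAR file,
# following the cardinalities described above. Artifact names are illustrative.

def ear_contents(module_name, exposes_web_services=False, num_components=0):
    artifacts = []
    # 0..1 WAR: present only when Web services or REST services are exposed
    if exposes_web_services:
        artifacts.append(module_name + "Web.war")
    # 1..1 EJB module (contains the SCA session bean and MDB)
    artifacts.append(module_name + "EJB.jar")
    # 0..n facade EJBs/MDBs, one per contained SCA Component (simplified)
    artifacts.extend("%sComponent%dFacade" % (module_name, i)
                     for i in range(num_components))
    # 1..n utility JARs: at least the SCDL module content itself
    artifacts.append(module_name + "SCDL.jar")
    return artifacts

print(ear_contents("InvoiceHandling", exposes_web_services=True, num_components=2))
```

Running the sketch for a module with exposed Web services and two components lists a WAR, the EJB module, two facades, and the SCDL utility JAR, mirroring the list above.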
The previously introduced SCA modules are containers for other SCA artifacts, such as SCA components. These are artifacts that provide and consume business services on a more granular level than SCA modules. For instance, the previously mentioned CustomerManagement module may contain one component responsible for creating new customers (CustomerCreation) and another one for querying information on existing customers (CustomerQuery). SCA components live in the boundaries of an underlying SCA module and only expose certain defined services to the outside world. The actual implementation type of components is not specified by the SCA Assembly Model (see the Resources section, SCA Assembly Model). Instead, each vendor of an SCA runtime can provide a different set of implementation types. In terms of Process Server, IBM offers a large variety of SCA Component implementations in the development environment (WebSphere Integration Developer) for use in the runtime.
Commonly used implementation types:
- Process (BPEL). Enables execution of business processes. This implementation involves the use of BPC (see the next section) and Process Server's data silos (see Resources for Part 1 of this series, section "Database").
- Human Tasks. Enable human interaction with business processes. This implementation involves the use of BPC (see the next section) and Process Server's data silos (see Resources for Part 1 of this series, section "Database").
- Java. Lets integration developers write custom Java code within a module.
- Rule Group. Provides the ability to define certain business-specific parameters (for example, Buying Approval Cap), which can be used to change the behavior of business processes at runtime. This involves the use of Process Server's data silos (see Resources for Part 1 of this series, section "Database").
- Interface Maps. Enable data and interface transformation.
Figure 3. Operational view on SCA Components
From an operational point of view, the actual assembly model of an SCA module is almost completely hidden at runtime (see Figure 2). Therefore neither operators nor administrators have a clear visual representation of the SCA components contained in a module. The only person who can provide a visual overview of the module contents is the Integration Developer. At this point, it is important to keep in mind that detailed information on the contained SCA Components and their implementation types is required in a variety of situations: troubleshooting and performance analysis for example.
Recommendation: Therefore, it is important that the development and operation teams extensively share information. Due to the higher complexity of SCA solutions, in comparison with traditional J2EE projects, this aspect is highly critical for the successful operation of a Process Server infrastructure.
As already mentioned in the first part of this article series, Process Server provides IBM's implementation of the SCA specifications. The so-called SCA runtime (as shown in the operational reference architecture) provides the actual functionality to run applications that adhere to the SCA programming model. The SCA runtime is one of the core building blocks of a Process Server environment. The following picture shows an overview of the runtime's operational architecture:
Figure 4. Operational architecture of SCA runtime
The J2EE capabilities provided by WebSphere Application Server build the foundation for the concrete SCA implementation. The SCA runtime heavily uses functions such as transaction handling, workload balancing, and database connectivity to access components (Database and Service Integration Bus) in Process Server's infrastructure layer (see Part 1 of this article series in the Resources section of this article). In addition, this implies that the SCA runtime also exposes the high quality of service of the underlying J2EE container to the SCA modules running on it.
- SCA implementation JARs:
- The Java archives contain IBM's vendor specific implementation code such as the SPI (Service Provider Interface) used by the different SCA Component implementations (Business Process, Java, Adapter, State Machine, Human Task, Business Rule, Interface Map, Selector) and the API which may be leveraged by business applications.
- Messaging resources:
The Service Integration Bus (SIB) named
SCA.SYSTEM.<busID>.Bus is mainly used
for asynchronous SCA communication between modules and Components. It
contains a number of Queues that are automatically generated at
deployment time and relate to specific SCA modules. These Queues can
be considered as channels between the different components in an SCA
environment, whereas the SCA runtime is responsible for routing the
messages between them.
Additionally, there is a Bus called SCA.APPLICATION.<busID>.Bus. From an operational perspective, it is not as important as the SYSTEM Bus, because it is intended to contain Queues for certain scenarios that are not related to the base SCA communication. This Bus can be used to define custom Queues when, for example, JMS Imports are used inside your SCA module.
- Extensions to the Integrated Solutions Console (ISC):
- The ISC, formerly known as the WebSphere Admin Console, is extended with some additional SCA-specific views. These enable operators and administrators to gain a glimpse of the developer's perspective on the deployed SCA modules (see also section "Supporting services"). However, these features cannot replace the required knowledge sharing between developers and operations staff.
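The asynchronous, queue-based SCA communication described above can be modeled with a small sketch: one queue per deployed module, with the runtime routing messages to the target module's queue. This is plain Python standing in for the real SIB mechanics; the queue naming and the `ScaRuntime` class are illustrative only.

```python
# Sketch: module-to-module asynchronous SCA communication modeled with
# per-module queues, as on the SCA.SYSTEM bus. In reality the queues are
# generated by Process Server at deployment time; names here are invented.
from collections import deque

class ScaRuntime:
    def __init__(self):
        self.queues = {}  # one queue per deployed module

    def deploy(self, module_name):
        # deployment creates the module's destination on the SYSTEM bus
        self.queues["sca/" + module_name] = deque()

    def send_async(self, target_module, message):
        # the SCA runtime routes the message to the target module's queue
        self.queues["sca/" + target_module].append(message)

    def receive(self, module_name):
        return self.queues["sca/" + module_name].popleft()

runtime = ScaRuntime()
runtime.deploy("CustomerManagement")
runtime.send_async("CustomerManagement", {"operation": "createCustomer"})
print(runtime.receive("CustomerManagement"))
```

The point of the model is the decoupling: the InvoiceHandling caller only needs the target module's destination, while routing and delivery are the runtime's responsibility.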
SCA runtime highlights
The SCA runtime provides the core functionality to run SCA modules and is therefore one of the main building blocks of Process Server. It is important to keep in mind that the SCA programming model hides most parts of a module's concrete technical implementation from the developer. It allows the developer to focus on the business aspects of an application. However, this can lead to complex error scenarios and hard to resolve performance issues at runtime that cannot be foreseen by operations people.
Therefore it is essential for a successful Process Server operation to drive extensive knowledge sharing between developers and administrators.
Developers should share extensive information with administrators to address the following questions:
- Are components implemented as Processes and/or Human Tasks?
- Are processes long-running or microflows (see section Business Process Choreographer)?
- How many modules belong to the application and how are they connected?
- Which interfaces do modules expose to the outside world?
- Does the module access the BPC or HTM APIs (see section WebSphere Business Process Choreographer)?
WebSphere Business Process Choreographer
As already mentioned in the first part of this article series, the WebSphere Business Process Choreographer often uses Process Server functionality. Many projects use business processes and/or human interactions to implement business requirements. The Business Process Choreographer is composed of the components Human Task Container and Business Process Container.
The Business Process Choreographer runs inside the J2EE runtime of WebSphere Application Server and uses different Application Server components like the Service Integration Bus and the Transaction Manager. Figure 5 provides an architectural overview of the Business Process Choreographer.
Figure 5: Operational architecture of Business Process Choreographer
The architecture mainly consists of two enterprise applications, the Business Process Container (called BPEContainer) and the Human Task Container (called TaskContainer). These applications access the BPE database and some Business Process Choreographer-specific queues located on the Business Process Choreographer Bus (see the Resources section, WebSphere Process Server operational architecture, Part 1: Base architecture and infrastructure components).
Every queue has a special function.
- BPEIntQueue: Internal queue in the BPE Container used for navigation between activities inside a process.
- BPERetQueue: Internal queue that contains messages that cannot be processed in the BPEIntQueue and need to be retried. The retry limit is configurable via the Business Process Container settings.
- BPEHoldQueue: Internal queue holding any messages that failed processing more times than the retry limit. Messages in the BPEHoldQueue can be replayed using the ISC or the wsadmin script replayFailedMessages.py.
- HTMIntQueue: A human task container internal queue.
- HTMHldQueue: Internal queue holding any messages that failed
processing in the HTMIntQueue. Messages can be replayed using the ISC
or the wsadmin script replayFailedMessages.py.
An administrator should monitor the Hold Queues and replay the messages as soon as the system runs stable again. Messages on the Hold Queues may be an indicator that some of the runtime components of Process Server are not available or are not working correctly.
- BFMJMSAPIQueue: Queue for accessing business process applications through the Java Message Service (JMS) API. Requests from JMS clients are sent to this queue.
- BFMJMSCallbackQueue: Another Queue for the JMS API. Responses with callback to JMS clients are sent to this queue.
- BFMJMSReplyQueue: JMS API Queue where responses to JMS clients are sent.
All Queues are created when setting up the Business Process and Human Task Containers using the provided wizards or wsadmin scripts. Connection Factories for accessing those Queues are also created. These come with some default values (maximum connections, for example) and may need to be tuned for optimal performance (see the Resources section, WebSphere Business Process Management V6.1 Performance Tuning).
Just as the Queues and Connection Factories are created during setup of the Business Process Choreographer configuration, the Datasource for the Business Process Choreographer database (BPEDB) is created with default values (for example, the maximum number of connections in the Datasource's Connection Pool). Tuning may be needed on this side of the configuration, too.
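The retry and hold behavior of the internal queues described above can be sketched as follows. This models the flow from BPEIntQueue through BPERetQueue to BPEHoldQueue conceptually in plain Python; it does not use the real Business Process Container APIs, and the `navigate` function and retry limit value are illustrative.

```python
# Sketch of the BPEIntQueue / BPERetQueue / BPEHoldQueue flow: a failing
# message is retried up to a configurable limit, then moved to the hold
# queue, where it waits for manual replay (for example, via the ISC or the
# replayFailedMessages.py wsadmin script).
from collections import deque

RETRY_LIMIT = 5  # configurable in the Business Process Container settings

def navigate(message, process_activity, hold_queue, retry_limit=RETRY_LIMIT):
    """Try to process one message, retrying failures up to retry_limit."""
    for attempt in range(1 + retry_limit):
        try:
            return process_activity(message)
        except Exception:
            continue  # message conceptually goes back to the retry queue
    # retry limit exceeded: park the message on the hold queue
    hold_queue.append(message)
    return None

hold = deque()

def always_fails(msg):
    raise RuntimeError("backend unavailable")

navigate({"activity": "invokeService"}, always_fails, hold)
print(len(hold))  # the failed message now sits on the hold queue
```

Monitoring `hold` in this model corresponds to the operational recommendation above: held messages signal that some backend the process depends on is unavailable.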
Human Task Manager and Business Flow Manager
As mentioned before, the Business Process Choreographer manages the life cycle of business processes and human tasks. For this purpose, the Business Flow Manager (called BFM) and the Human Task Manager (called HTM) are introduced. The following picture shows how these can be accessed to enable the execution of business processes and human tasks.
Figure 6: Business Process Choreographer - Interfaces and Clients
Both Human Task Manager and Business Flow Manager provide a generic API (see the Resources section, WebSphere Process Server API and SPI documentation) that can be accessed through different Interfaces. The HTM API can be accessed via a Web Service or EJB Interface, whereas the BFM API is also available through a JMS Interface.
Consider a few different possibilities to access those APIs:
- You can make use of Process Server's built-in clients, like the BPC Explorer (see the Resources section).
- You can use Process Server's Client Generation Framework to generate a client.
- You can write your own custom client.
From an operational point of view, the usage of different interfaces requires you to monitor and tune parameters concerned with resources used for providing access to those interfaces.
- The Web Container thread pool sizes for SOAP/HTTP requests
- Queue sizes for the usage of the JMS API
- Security considerations for using the Enterprise Java Beans API and Web Services API operations
- Database query optimization for some of the APIs' operations, like querying business process and task-related objects (usage of materialized views or stored queries; see the Resources section)
For administrative purposes, the JMX MBean API is a generic way to manage aspects related to human tasks and business processes inside WebSphere Process Server. The JMX MBean API provides access to various MBeans, for example, the Human Task Manager (TaskManagement MBean).
Some common Process Server operational tasks are:
- Querying and replaying failed messages from HTM and BFM hold queues
- Refreshing staff queries
- Removing unused staff queries
- Querying Failed Events (see section "Failed Event Manager")
- Changing HTM or BFM configuration
MBeans are useful for the automation of administrative tasks and the monitoring of the Process Server environment.
Business processes and the Business Flow Manager
The Business Flow Manager's main task is the execution and management of business processes written in the Business Process Execution Language (hereafter called BPEL). BPEL processes in the WebSphere Process Server environment can be divided into two types, microflows and long-running processes, which differ considerably in their handling. Microflows cannot involve people interaction (Human Tasks) and run automatically from start to finish. Furthermore, they cannot contain timer operations (wait, onAlarm) or receive any kind of asynchronous messages (for example, through Event Handlers). They run in a single transaction and their state is only held in memory, not persisted to the Process Server database.
Long-running processes can invoke services that do not respond immediately and can involve people interaction. Their state is persistent and they can run over a long time. This is why versioning becomes an important factor for those types of processes when modules holding such processes need to be updated.
Some parameters in the BFM configuration can affect runtime and performance:
- Retry limit. The number of retries for a failing call
- Retention Queue message limit. Maximum number of messages allowed on the retention queue before Business Flow Manager processing switches to quiesce mode
- Audit logging. (Consider switching it off if not needed)
- CEI logging. (Consider switching it off if not needed)
Switching off audit logging and CEI logging disables the intended means of auditing and monitoring. The Business Process Database is not intended to be used as an audit store, both because of performance impacts and because it contains only a snapshot, not a complete history. Keep that in mind when switching off the above monitoring features.
The type of process and, for long-running processes, the transactional behavior inside the process and the interaction type with other components are determined at development time. From an operational perspective, this is quite important because, depending on the transactional boundaries and the involved protocols, errors in processes can surface at different places. Only the developers can influence where and how.
Synchronous interaction style:
- Errors are delivered to the caller.
Asynchronous interaction style:
- Failed Events
- Failed processes in BPC Explorer
- Stopped activities in BPC Explorer
- Service Integration Bus (SI Bus) exception Destinations
- Hold Queues in Business Process Choreographer
You should constantly monitor the different places where errors can surface to ensure the health of the system. Tools and applications are available to help you monitor those places: use the Service Integration Bus Explorer (see the Resources section) for queue monitoring (Service Integration Bus destinations and Hold Queues) and the Failed Event Manager for Failed Events (see section "Failed Event Manager").
Human tasks and the Human Task Manager
The main task of the Human Task Manager is to manage human tasks described in the Task Execution Language (TEL). Human interactions can be defined as Inline Tasks (implemented as part of a business process) or as Stand-alone Tasks (implemented as SCA components). The developer is responsible for task assignments (defining which users are allowed to perform a certain role's work associated with that task).
From an operational perspective, the administrator needs to know that, depending on the assignment of tasks to persons or groups, the performance of Process Server can decrease dramatically. For example, if a group is assigned to a task's role and Group Work Items are not used, a Work Item is created for every user in that group, which might result in thousands of Work Items created for one task instance.
Recommendation: As best practice, you should enable Group Work Items in the Human Task Container's settings and use the Group staff verb for task assignments inside the Human Task.
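The Work Item explosion described above can be made concrete with a small sketch. This is plain Python modeling the counting behavior only, not the Human Task Container's actual implementation; the group size and work item naming are hypothetical.

```python
# Sketch of the Work Item explosion: assigning a group of N users to a task
# role creates N Work Items unless Group Work Items are enabled, in which
# case a single Work Item for the group suffices.
def work_items_for_task(group_members, group_work_items_enabled):
    if group_work_items_enabled:
        return ["WorkItem(group)"]  # one item for the whole group
    # one Work Item per user in the group
    return ["WorkItem(%s)" % user for user in group_members]

approvers = ["user%d" % i for i in range(1000)]  # a large group, e.g. from LDAP
print(len(work_items_for_task(approvers, group_work_items_enabled=False)))  # 1000
print(len(work_items_for_task(approvers, group_work_items_enabled=True)))   # 1
```

With a 1000-member group, a single task instance produces 1000 Work Items without Group Work Items but only one with them, which is why the recommendation above matters for performance.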
Special values in the HTM configuration can affect the runtime of WebSphere Process Server or even the developed applications:
- People resolution. Refresh schedule and timeout affect how often your staff queries are refreshed. If you have a fast changing organization, you may need a refresh more often than the default.
- E-mail service. Configure the e-mail settings for escalation mails.
- Custom properties that affect the performance and behavior of HTM (see the Resources section for a detailed description).
- Audit logging. (Consider switching it off if not needed)
- CEI logging. (Consider switching it off if not needed)
Business Process Choreographer highlights
The Business Process Choreographer provides all functionality to run BPEL processes and Human Tasks inside WebSphere Process Server. The developer is responsible for deciding on the implementation and the qualifiers that result in transactional boundaries being defined and different styles of interaction being used. Therefore communication between developers and operators is essential for a successful Process Server environment.
To prevent runtime and/or performance problems occurring in production due to a Business Process Choreographer that is not tuned, the following tasks should be performed in preproduction:
- Load testing
- Failover testing
- Stress testing
Common Event Infrastructure
This section introduces the general concepts behind the Common Event Infrastructure (CEI) and describes how it is utilized within Process Server V6.1. The CEI is IBM's infrastructure to emit, distribute, persist, and consume standardized events. The Event format used by the CEI is the Common Base Event (CBE) standard that defines the general structure of a CBE and its XML representation.
The following figure shows the building blocks which make up the CEI.
Figure 7: Building blocks of the Common Event Infrastructure
The central component in the infrastructure is the Event Service. It is responsible for processing the CBEs, writing them to the Event Datastore, and distributing them to JMS Destinations. An Event Service can be activated for clusters or servers and the corresponding application is also installed there.
This section talks about the default Event Service for simplicity, although there can be more than one Event Service per Process Server cell.
With CEI, you are able to send Events synchronously or asynchronously to the Event Service.
Event Service Message Driven Bean
For the asynchronous communication, a Message Driven Bean (MDB), called Event Service MDB, is installed and retrieves the events using JMS mechanisms.
As described in the first part of this article series, a Service Integration Bus (SIB) is the realization of the default (JMS) messaging provider of Process Server.
When the Event Service is configured, the Event Service MDB for the default messaging provider is installed along with all resources needed to receive Events asynchronously. It is possible to install more Event Service MDBs for other JMS providers for the same Event Service, as also shown in Figure 7. Nevertheless, this article refers only to the default Event Service MDB. The following list shows the relevant resources for asynchronous communication:
- Common Event Infrastructure (SI-)Bus
- Destinations on the CEI Bus for each Event Service
- JMS Artifacts pointing to the destinations
Event Emitter and Event Consumer
To help you understand the configuration of the CEI, this article describes everything from the Event Emitter to the Event Consumer, including the artifacts that are used to abstract from the resources.
- Event Emitter Factory Profile
- The Event Emitter Factory Profile is the most important configuration artifact with respect to Event Emitters. You can configure an Event Emitter Factory to send Events synchronously and/or asynchronously: the so-called Event Bus Transmission Profiles are used for synchronous sending and JMS Transmission Profiles for asynchronous sending of events. You can also configure the preferred communication style, and the Emitter can define the communication style on its own. With respect to performance and reliability, it is important that the Event Emitter Factory Profile defines whether an event is sent within the same transaction or in its own transaction.
- Event Bus Transmission Profile
- An Event Bus Transmission Profile defines where the corresponding Event Service is running for sending the events (the Event Service can run within the same cell or a different cell). A JMS Bus Transmission Profile references the JMS Queue and the JMS Queue Connection Factory to which to send the CBEs.
- Filter Factory Profile
- You can specify a custom filter in a Filter Factory Profile to send
only specific Events to the Event Service.
When an application sends a CBE, it retrieves the Event Emitter Factory Profile and the Transmission Profile using the Java Naming and Directory Interface (JNDI). These artifacts are used by the CEI API to create an Event Emitter Factory object, which in turn creates the Emitter objects. The Events are sent either synchronously or asynchronously, depending on the Transmission Profile that you use. In the synchronous case, the communication between the Event Emitter and the Event Service is done by standard EJB mechanisms. In the asynchronous case, the Event is sent to the defined JMS Queue and processed by the corresponding Event Service MDB, which forwards the CBE to the Event Service.
What the Event Service does with the received CBEs depends on the configuration. You can configure the Event Service to write all Event data to a Datastore and/or to distribute the events to other JMS Destinations.
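The emitter-side choice between the synchronous and asynchronous paths can be sketched as follows. The profile fields and the `EventService`/`JmsQueue` classes here are simplified stand-ins for the real CEI artifacts, not the actual CEI API.

```python
# Sketch of an Event Emitter choosing the transmission path based on the
# Event Emitter Factory Profile: synchronous delivery goes straight to the
# Event Service (an EJB call in reality), asynchronous delivery goes to a
# JMS queue and is later picked up by the Event Service MDB.
class EventService:
    def __init__(self):
        self.received = []
    def submit(self, event):            # synchronous path
        self.received.append(event)

class JmsQueue:
    def __init__(self):
        self.messages = []
    def put(self, event):               # asynchronous path
        self.messages.append(event)

def emit(event, profile, event_service, jms_queue):
    # the preferred communication mode comes from the emitter profile
    if profile["preferred_mode"] == "synchronous":
        event_service.submit(event)
    else:
        jms_queue.put(event)

service, queue = EventService(), JmsQueue()
emit({"name": "ProcessStarted"}, {"preferred_mode": "synchronous"}, service, queue)
emit({"name": "ActivityEnded"}, {"preferred_mode": "asynchronous"}, service, queue)
print(len(service.received), len(queue.messages))
```

The sketch shows why the profile matters operationally: it decides whether event submission happens inline with the emitting transaction or is decoupled via messaging.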
Event Group Profiles
CBEs that should be routed to the same JMS Destinations are grouped by so-called Event Group Profiles. Each Event Group Profile specifies a filter condition that defines the Events that belong to that particular Event Group. Furthermore, it determines whether the Events of this Event Group are stored in the Event Datastore. For each Event Group, you can define several JMS Queues and at most one JMS Topic to which the corresponding CBEs should be sent by the Event Service.
Important: It is crucial that you understand the impact of storing the Event data of an Event Group in the Event Datastore. This can become a performance issue if you are dealing with large CBEs or running under heavy load. Also, the CBEs are not released automatically from the Datastore, so the overall behavior of the CEI might degrade over time or fill up your file systems.
An Event Consumer application retrieves the CBEs by either querying the Event Service directly (using the CEI API) or getting the Events over JMS. The first way is only applicable if the Event data has been stored before.
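The Event Group routing described above (filter condition, optional persistence, several queues, at most one topic) can be sketched in plain Python. The filter, group structure, and severity field are hypothetical; the real Event Service uses its own filter expression language and JMS destinations.

```python
# Sketch of Event Group Profile routing: each group has a filter condition,
# a persistence flag, a list of queues, and at most one topic.
def distribute(event, event_groups, datastore):
    for group in event_groups:
        if group["filter"](event):              # filter condition matches
            if group["store"]:
                datastore.append(event)         # write to Event Datastore
            for queue in group["queues"]:
                queue.append(event)
            if group.get("topic") is not None:  # at most one JMS Topic
                group["topic"].append(event)

store, audit_queue = [], []
groups = [{
    # hypothetical filter: only events at warning severity or above
    "filter": lambda e: e["severity"] >= 50,
    "store": True,
    "queues": [audit_queue],
    "topic": None,
}]
distribute({"name": "TaskFailed", "severity": 60}, groups, store)
distribute({"name": "Heartbeat", "severity": 10}, groups, store)
print(len(store), len(audit_queue))
```

Only the matching event is persisted and routed; the low-severity event is dropped, which illustrates how tightening the filter or disabling the store flag reduces Datastore growth.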
Operational view on CEI
The following picture illustrates the concepts described above for the default messaging. All required resources (the CEI-Bus, Destinations, and so forth) are created automatically along with the configuration of the Event Service. A set of default configuration artifacts is also created.
Figure 8: Operational view on CEI
For an Event Emitter, a Default Event Emitter Factory Profile is generated, which references a synchronous and an asynchronous Emitter Profile; the preferred interaction mode is set to "synchronous". The artifacts referenced by the Default Event Emitter Factory Profile are the Default Event Bus Transmission Profile and the Default JMS Transmission Profile. The first refers to the Event Bus EJB, which is installed during the configuration of the Event Service itself. The Default JMS Transmission Profile indicates that Events are transmitted to the CEI-Bus. On this bus, the server or cluster running the Event Service is registered as a Bus Member.
By default, the corresponding Messaging Engine runs inside that server or cluster. In more complex Process Server infrastructures, it is a good practice to separate the running Messaging Engines from the deployed SCA modules (usually on a so-called Messaging Cluster).
Events on the CEI Queue Destination are picked up by the Event Service MDB for default messaging and are routed from there to the Event Service.
For the Event Service, an Event Group Profile (All Events Group Profile) is created that routes ALL CBEs to a special topic and stores the Event data in the Event Datastore. The corresponding Topic Space on the CEI-Bus is called the All Events Topic Destination.
This is important when you think of high-load scenarios or of scenarios where large Event data is transmitted: by default, all the Event data is stored. In production systems, it is recommended to do one of the following for performance reasons:
- Switch off the persistence of the Event data
- Remove the All Events Group Profile
Common Event Infrastructure highlights
As you work with business processes, you need to understand that administrators of business processes require a great deal of information, for example, the execution time of a process instance. Process Server and the WebSphere Integration Developer (Integration Developer) enable a process developer to emit CBEs by either just checking an option for standard Events or entering a short Java code fragment to emit CBEs with special content. Some examples of standard Events for business processes are Events containing the start time of the process and the end time of an activity. As an Event Consumer, you can write your own application or just use the BPC Observer (see the Resources section) or products like WebSphere Business Monitor to aggregate information from Events, display it in real time, and analyze historical data.
You can use the Event data in a variety of ways. Always be aware of the performance impact Event submission and persistence might have depending on the configuration of the resources related to the Common Event Infrastructure.
In this section, we describe additional services that administrators need to be aware of when operating Process Server infrastructures. The following list highlights some of the applications needed to manage supporting services:
- Business Rules - Business Rules Manager
- Relationships - Relationship Manager
- Selectors - Integrated Solutions Console
- Application Scheduler - Integrated Solutions Console
- Adapters - Integrated Solutions Console
- Failed Events - Failed Event Manager
The Failed Event Manager will be discussed in more detail because of its importance for operating a Process Server environment properly.
Business Rules and the Business Rules Manager
The Business Rules Manager is a Web application that helps business analysts administer Business Rules. Business Rules are developed using Rule Group SCA components. Rule Groups are used to select either Rule Sets or Decision Tables based on some selection criteria. The most common example is the selection based on the current date.
A Rule Set itself is built up from Rule Templates and Rules. The following types of (Business) Rules can be created within a Rule Set:
- Action Rules. Indicate an action, like invoking another SCA component, without specifying any condition. Therefore, Action Rules are always executed when the Rule Set is called.
- If/Then-Rules. Similar to Action Rules but additionally specify a condition indicating when the corresponding action will take place.
- Template Rules. Instantiations of Rule Templates.
You can view Rule Templates as abstractions of Action or If/Then-Rules that add the ability to define configurable parameters. These parameters can be presented in a more readable fashion to a business analyst. This is where the Business Rules Manager comes into play: it is used to change the values of the parameters of Template Rules and to define new Rules based on them. Such changes can be made active at runtime.
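The rule types above can be sketched with a small evaluation loop: Action Rules always fire, If/Then Rules fire only when their condition holds, and a Template Rule is an instantiation of a parameterized template. The rules and the buying-approval-cap parameter are hypothetical examples in plain Python, not the real Rule Set runtime.

```python
# Sketch of Rule Set evaluation: each rule inspects the facts and may
# append an action. A Rule Template exposes a parameter (here, a cap)
# that a business analyst could change at runtime.
def approval_cap_template(cap):
    # Rule Template: 'cap' is the configurable parameter
    return lambda facts, actions: (
        actions.append("requireApproval") if facts["amount"] > cap else None)

def evaluate(rule_set, facts):
    actions = []
    for rule in rule_set:
        rule(facts, actions)
    return actions

rule_set = [
    lambda facts, actions: actions.append("logRequest"),        # Action Rule
    lambda facts, actions: (                                    # If/Then Rule
        actions.append("fastTrack") if facts["amount"] < 100 else None),
    approval_cap_template(cap=5000),                            # Template Rule
]
print(evaluate(rule_set, {"amount": 7500}))
```

Changing the `cap` parameter changes the runtime behavior without redeploying anything, which is the role the Business Rules Manager plays for Template Rules.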
For administrators, it is quite important to know that the usage of Business Rules might change runtime behavior. Changes take effect immediately when published in the Business Rules Manager. Business Rule data is stored in the Common-DB of Process Server. Therefore, the Business Rules Manager does not run properly when the Common-DB becomes unavailable, which in turn affects components trying to use Business Rules.
Relationships and the Relationship Manager
Relationships are artifacts that help to correlate different Business Objects in Process Server. A concrete mapping entry for the Relationship is called a Relationship Instance. These artifacts can be administered at runtime using the Relationship Manager application. Relationship data is also stored in the Common-DB.
For example, imagine a static Relationship between a Business Object that needs the ZIP code of an address and another that needs the city's name. If, for some reason, you forgot to map some ZIP codes to city names in the Relationship, you can add them after deployment using the Relationship Manager. It is also important to keep in mind that changing Relationships might change runtime behavior.
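The ZIP-code example can be sketched as a simple lookup structure. This is a conceptual illustration only, with hypothetical class and method names; real Relationship Instances live in the Common-DB and are maintained through the Relationship Manager rather than in application code.

```java
import java.util.HashMap;
import java.util.Map;

public class ZipCityRelationship {
    // Each entry corresponds to one Relationship Instance:
    // a concrete mapping between two Business Object attribute values.
    private final Map<String, String> instances = new HashMap<>();

    // Analogous to adding a missing Relationship Instance after deployment
    public void addInstance(String zipCode, String cityName) {
        instances.put(zipCode, cityName);
    }

    // Correlate the ZIP-code view with the city-name view
    public String resolve(String zipCode) {
        return instances.get(zipCode); // null if no instance was ever mapped
    }

    public static void main(String[] args) {
        ZipCityRelationship relationship = new ZipCityRelationship();
        relationship.addInstance("70567", "Stuttgart");
        System.out.println(relationship.resolve("70567")); // Stuttgart
        // A forgotten mapping can be added later, as with the Relationship Manager
        relationship.addInstance("80331", "Muenchen");
        System.out.println(relationship.resolve("80331")); // Muenchen
    }
}
```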
Selectors
Selectors are SCA components introduced for dynamic routing of calls to other SCA components based on some selection criterion (mostly the current date). SCA destinations of Selectors can be added or changed at runtime via the WebSphere Integration Console.
One common example of using Selectors is SCA module exchange. Imagine a Selector component calls another module that should be replaced by a new version. If the Selector is based on the current date, the administrator installs the new version of the module and adds another condition to the Selector so that the new version is called from an activation date onward. Calls after the activation date are then routed to the new version, while prior calls are still handled by the old module version.
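The date-based module exchange above can be sketched as follows. The class, module names, and dates are hypothetical; a real Selector's routing table is maintained through the administrative console, not in code.

```java
import java.time.LocalDate;
import java.util.TreeMap;

public class ModuleVersionSelector {
    // activation date -> target SCA module version; sorted so we can ask
    // "which route is active on a given date?"
    private final TreeMap<LocalDate, String> routes = new TreeMap<>();

    // Analogous to the administrator adding a condition at runtime
    public void addRoute(LocalDate activationDate, String targetModule) {
        routes.put(activationDate, targetModule);
    }

    // Select the most recent route whose activation date is not after callDate.
    // Assumes at least one route is active on or before callDate.
    public String select(LocalDate callDate) {
        return routes.floorEntry(callDate).getValue();
    }

    public static void main(String[] args) {
        ModuleVersionSelector selector = new ModuleVersionSelector();
        selector.addRoute(LocalDate.of(2008, 1, 1), "CustomerManagement_v1");
        // New version installed; route added with a future activation date
        selector.addRoute(LocalDate.of(2008, 10, 1), "CustomerManagement_v2");
        System.out.println(selector.select(LocalDate.of(2008, 9, 15))); // CustomerManagement_v1
        System.out.println(selector.select(LocalDate.of(2008, 10, 2))); // CustomerManagement_v2
    }
}
```

The key operational point the sketch captures: both module versions remain installed, and the switch happens purely through Selector configuration rather than a redeployment.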
Application Scheduler
The Application Scheduler provides the ability to start and stop applications based on date and time information. You can use it, for example, to run applications that perform cost-intensive operations only at night, when the load on the system is low. Data for the Application Scheduler is stored in the Common-DB of Process Server.
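The "run only at night" decision can be illustrated with a minimal sketch. The window boundaries and class name are assumptions for illustration; the real Application Scheduler stores its schedule in the Common-DB and starts or stops whole applications for you.

```java
import java.time.LocalTime;

public class NightWindow {
    // Run cost-intensive applications only between 22:00 and 06:00.
    // Note the window crosses midnight, so the check is an OR, not an AND.
    static boolean shouldRun(LocalTime now) {
        LocalTime start = LocalTime.of(22, 0);
        LocalTime end = LocalTime.of(6, 0);
        return !now.isBefore(start) || now.isBefore(end);
    }

    public static void main(String[] args) {
        System.out.println(shouldRun(LocalTime.of(23, 30))); // true (inside window)
        System.out.println(shouldRun(LocalTime.of(3, 0)));   // true (after midnight)
        System.out.println(shouldRun(LocalTime.of(12, 0)));  // false (daytime load)
    }
}
```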
Failed events and the Failed Event Manager
This article describes the Failed Event Manager in detail because it is crucial for operating Process Server properly.
The SCA runtime behaves differently from the usual SIB error handling when problems arise during asynchronous communication. This article discusses a special situation in which so-called Failed Events are created. Imagine an inter-module, one-way asynchronous call from one SCA module to another. The main focus here is on answering the question: What happens if the target module is not installed and a request is sent?
The SCA runtime cannot find the corresponding target destination on the SCA System Bus for the request. The caller cannot be informed of the error through exceptions or the like, because the communication is one-way and therefore no response exists to indicate the error. Usually you would expect such errors to be processed at the SIB level, that is, messages would be put to the System Exception Queue defined for the sender. However, System Exception Queues are difficult to deal with because no standard mechanism exists to put messages back to the relevant destination after fixing the problem. Therefore, Process Server uses the so-called Failed Event Manager, which is specially designed for such situations.
Figure 9 shows what happens when the described problem occurs. The SCA runtime cannot find the target destination, so it creates a Failed Event message, which in this case is put to a special destination on the SCA System Bus (step 1.1), and also logs a statement that a Failed Event occurred. The Failed Event destination follows a fixed naming convention for the Server or Cluster where the invoking SCA module is running.
The Failed Event Manager, which is mapped to all Clusters and Servers where SCA modules might run, picks up the message from those destinations (step 1.2) and stores the Failed Event data in the Common-DB of Process Server (step 1.3). This data includes information such as the names of the calling and target modules, the business data carried by the attempted asynchronous SCA call, some error information, and so on.
Figure 9: Operational view on Failed Event Manager
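The data stored in step 1.3 can be pictured with a hypothetical sketch like the one below. The field and method names are illustrative only and do not reflect the actual Process Server schema or API; they merely show which pieces of information an administrator can expect to find on a Failed Event, and that the original payload remains available for resubmission.

```java
import java.time.Instant;

public class FailedEvent {
    private final String sourceModule;       // calling SCA module
    private final String destinationModule;  // target SCA module
    private final String businessData;       // payload of the failed asynchronous call
    private final String failureReason;      // error information logged by the SCA runtime
    private final Instant failureTime;

    public FailedEvent(String sourceModule, String destinationModule,
                       String businessData, String failureReason, Instant failureTime) {
        this.sourceModule = sourceModule;
        this.destinationModule = destinationModule;
        this.businessData = businessData;
        this.failureReason = failureReason;
        this.failureTime = failureTime;
    }

    // Resubmission puts the original payload back to the target SCA destination
    public String extractOriginalMessage() {
        return businessData;
    }

    // What an administrator would read when analyzing the error situation
    public String summary() {
        return sourceModule + " -> " + destinationModule + ": " + failureReason;
    }

    public static void main(String[] args) {
        FailedEvent event = new FailedEvent("InvoiceHandling", "CustomerManagement",
                "<invoice id=\"42\"/>", "target destination not found", Instant.now());
        System.out.println(event.summary());
        System.out.println(event.extractOriginalMessage());
    }
}
```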
An administrator has to monitor the Failed Event Manager front-end application, which is part of the Integrated Solutions Console. This application queries the Failed Event Manager to get information on Failed Events from the Process Server Common-DB (steps 2.1-2.2). The administrator can then analyze the error situation and fix the described problem by installing the SCA module that should be called.
After fixing the problem, the administrator can resubmit the Failed Event using the front-end application (step 3.1). The original message is extracted from the Failed Event data and put to the target SCA destination on the SCA System Bus (step 3.2). From there, the message is consumed by the SCA runtime and normal processing continues (step 3.3).
In many situations, analyzing Failed Events can help find the causes of behavior that is inexplicable at first sight. One example is that Business Processes might hang (stay in the running state) because of underlying SCA communication errors: a process cannot continue if a call to another SCA module ended in the creation of a Failed Event and no response was ever sent back. Besides the one-way situation described above, two-way calls might be interrupted by a Failed Event as well.
Adapters
This section gives a short summary of the operational aspects of adapters. As there are many different adapters available, we focus on WebSphere Adapters, which are based on the J2EE Connector Architecture (hereafter called J2CA). J2CA is defined in terms of the following two sets of contracts (see Figure 10):
- Service Provider Interface (SPI) - defines how the application server will interact with the adapter
- Common Client Interface (CCI) - defines how clients see the adapter.
Figure 10: Parts of J2CA (source: http://www.ibm.com/developerworks/library/ws-soa-j2caadapter/index.html)
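The split between the two contract sides can be sketched with local stand-in types. The real interfaces live in `javax.resource.spi` (for example, `ManagedConnectionFactory`) and `javax.resource.cci` (for example, `ConnectionFactory` and `Interaction`); the simplified interfaces and the echo "EIS" below are assumptions made purely to keep the example self-contained.

```java
// CCI side: what an application client sees when talking to the adapter
interface ClientConnection {
    String execute(String interactionSpec, String inputRecord);
}

// SPI side: what the application server uses to manage the physical
// connection (pooling, transactions, security) behind the client's back
interface ManagedConnection {
    ClientConnection getClientHandle();
    void cleanup(); // handle returned to the server's pool, link kept open
}

public class EchoAdapter implements ManagedConnection {
    public ClientConnection getClientHandle() {
        // A trivial stand-in "EIS" that echoes the request back
        return (spec, record) -> spec + ":" + record;
    }

    public void cleanup() {
        // release client handles; the physical connection would stay pooled
    }

    public static void main(String[] args) {
        ManagedConnection managed = new EchoAdapter();       // server-side (SPI)
        ClientConnection client = managed.getClientHandle(); // handed out (CCI)
        System.out.println(client.execute("QUERY", "customer42")); // QUERY:customer42
        managed.cleanup();
    }
}
```

The design point the sketch illustrates: the client only ever programs against the CCI side, so the server can swap, pool, and recover physical connections through the SPI side without the client noticing.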
As resource adapters are used to connect to Enterprise Information Systems, they can play an important role in a service-oriented architecture: data is stored in different locations, and connectivity is needed to access it.
When using an adapter, keep in mind that only J2CA and WebSphere Adapters can take part in a transaction (and only some adapters support two-phase commit). Availability may also be a point to consider, as only J2CA and WebSphere Adapters can be managed by WebSphere's High Availability Manager.
WebSphere Adapters can be installed differently:
- as part of an application
- standalone as Resource Adapter Archive (RAR)
Installing WebSphere Adapters within an application
The default, unless set otherwise during creation of the application that uses the adapter, is to deploy the adapter RAR file within the application. Resources like J2CConnectionFactories for outbound adapters are then managed via the application management pages, and a redeploy of the application overwrites any changes made there. Also, resources defined at the application level cannot be reused by other applications. An advantage of deploying the RAR file within an application is classloader isolation.
Installing the WebSphere Adapter RAR File standalone (6.1+)
Installing the WebSphere Adapter independently of an Enterprise Archive changes the administration of the adapter. Resources can be managed independently of the application that uses the adapter and are not overwritten during deployment or update of the application.
But be careful: there is no classloader isolation for standalone adapters. Only one copy of the adapter JAR file and its third-party libraries can exist in your Java Virtual Machine.
Supporting services highlights
Administrators and operators of a Process Server environment cannot avoid dealing with the management of Adapters, Business Rules, Selectors, Relationships, the Application Scheduler, and especially Failed Events. Since working with all of these components may have a significant effect on the deployed business applications (or even on other remote applications that use them), cooperation between development and operations people is crucial.
Conclusion
In this second and final part of the series, you got to know many components located in the Process Server Runtime Layer and Function Layer from an operational point of view. Among other things, you learned how the SCA runtime builds the basis for running business applications and which aspects are important to successfully run BPEL processes on your Process Server infrastructure.
Resources
- "WebSphere Process Server operational architecture, Part 1: Base architecture and infrastructure components" (developerWorks, Sept 2008) provides information about WebSphere Process Server architecture and infrastructure components.
- SCA Assembly Model
- WebSphere Process Server API and SPI documentation
- WebSphere Business Process Management V6.1 - Performance Tuning
- BPC Explorer
- BPC Observer
- Service Integration Bus Explorer
- Business Flow Manager Custom Properties
- "Recovering from failed asynchronous SCA service invocations on WebSphere Process Server" (developerWorks, Sept 2008) describes potential message routes and recovery scenarios. It explains how to configure the system to set up recovery, and it covers a wide range of SCA messaging options, including both WebSphere Default Messaging (JMS provider) and WebSphere MQ.