As a business process analyst or developer, your understanding of the process engine in IBM Business Automation Manager Open Editions can help you design more effective business assets and a more scalable process management architecture. The process engine implements the Business Process Management (BPM) paradigm in IBM Business Automation Manager Open Editions and manages and executes business assets that comprise processes. This document describes concepts and functions of the process engine to consider as you create your business process management system and process services in IBM Business Automation Manager Open Editions.
Process engine in IBM Business Automation Manager Open Editions
The process engine implements the Business Process Management (BPM) paradigm in IBM Business Automation Manager Open Editions. BPM is a business methodology that enables modeling, measuring, and optimizing processes within an enterprise.
In BPM, a repeatable business process is represented as a workflow diagram. The Business Process Model and Notation (BPMN) specification defines the available elements of this diagram. The process engine implements a large subset of the BPMN 2.0 specification.
With the process engine, business analysts can develop the diagram itself. Developers can implement the business logic of every element of the flow in code, making an executable business process. Users can execute the business process and interact with it as necessary. Analysts can generate metrics that reflect the efficiency of the process.
The workflow diagram consists of a number of nodes. The BPMN specification defines many kinds of nodes, including the following principal types:
-
Event: Nodes representing something happening in the process or outside of the process. Typical events are the start and the end of a process. An event can throw messages to other processes and catch such messages. Circles on the diagram represent events.
-
Activity: Nodes representing an action that must be taken (whether automatically or with user involvement). Typical activities are a task, which represents an action taken within the process, and a call to a sub-process. Rounded rectangles on the diagram represent activities.
-
Gateway: A branching or merging node. A typical gateway evaluates an expression and, depending on the result, continues to one of several execution paths. Diamond shapes on the diagram represent gateways.
When a user starts the process, a process instance is created. The process instance contains a set of data, or context, stored in process variables. The state of a process instance includes all the context data and also the current active node (or, in some cases, several active nodes).
Some of these variables can be initialized when a user starts the process. An activity can read from process variables and write to process variables. A gateway can evaluate process variables to determine the execution path.
For example, a purchase process in a shop can be a business process. The content of the user’s cart can be the initial process context. At the end of execution, the process context can contain the payment confirmation and shipment tracking details.
Optionally, you can use the BPMN data modeler in Business Central to design the model for the data in process variables.
The workflow diagram is represented in code by an XML business process definition. The logic of events, gateways, and sub-process calls is defined within the business process definition.
Some task types (for example, script tasks and the standard decision engine rule task) are implemented in the engine. For other task types, including all custom tasks, when the task must be executed the process engine executes a call using the Work Item Handler API. Code external to the engine can implement this API, providing a flexible mechanism for implementing various tasks.
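As an illustration, a custom work item handler might look like the following sketch. The class name and the "Notification" task are hypothetical examples, not part of the product; only the WorkItemHandler interface and the WorkItemManager calls come from the Work Item Handler API.
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

// Hypothetical handler for a custom "Notification" task
public class NotificationWorkItemHandler implements WorkItemHandler {

    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Read a task input parameter defined in the process diagram
        String recipient = (String) workItem.getParameter("Recipient");
        System.out.println("Sending notification to " + recipient);

        // Notify the process engine that the work item is complete so that the process can continue
        manager.completeWorkItem(workItem.getId(), null);
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Called when the work item is aborted, for example when the process instance is aborted
        manager.abortWorkItem(workItem.getId());
    }
}
You can register such a handler for the task name that the process uses, for example with ksession.getWorkItemManager().registerWorkItemHandler("Notification", new NotificationWorkItemHandler()).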
The process engine includes a number of predefined types of tasks. These types include a script task that runs user Java code, a service task that calls a Java method or a Web Service, a decision task that calls a decision engine service, and other custom tasks (for example, REST and database calls).
Another predefined type of task is a user task, which includes interaction with a user. User tasks in the process can be assigned to users and groups.
The process engine uses the KIE API to interact with other software components. You can run business processes as services on a KIE Server and interact with them using a REST implementation of the KIE API. Alternatively, you can embed business processes in your application and interact with them using KIE API Java calls. In this case, you can run the process engine in any Java environment.
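For example, if your processes run on a KIE Server, a remote interaction from Java might look like the following sketch. It assumes the kie-server-client library (org.kie.server.client) is on the class path; the server URL, credentials, container ID, process ID, and variable name are placeholders.
// Placeholder connection settings for a KIE Server instance
KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
        "http://localhost:8080/kie-server/services/rest/server", "user", "password");
config.setMarshallingFormat(MarshallingFormat.JSON);

KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
ProcessServicesClient processClient = client.getServicesClient(ProcessServicesClient.class);

// Start a process instance in a deployed container (KJAR) and pass process variables
Map<String, Object> variables = new HashMap<>();
variables.put("customer", "jdoe");
Long processInstanceId = processClient.startProcess("my-container", "com.sample.MyProcess", variables);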
Business Central includes a user interface for users executing human tasks and a form modeler for creating the web forms for human tasks. However, you can also implement a custom user interface that interacts with the process engine using the KIE API.
The process engine supports the following additional features:
-
Support for persistence of the process information using the JPA standard. Persistence preserves the state and context (data in process variables) of every process instance, so that they are not lost in case any components are restarted or taken offline for some time. You can use an SQL database engine to store the persistence information.
-
Pluggable support for transactional execution of process elements using the JTA standard. If you use a JTA transaction manager, every element of the business process runs within a transaction. If the element does not complete, the context of the process instance is restored to the state that it had before the element started. A minimal sketch of this behavior follows this list.
-
Support for custom extension code, including new node types and other process languages.
-
Support for custom listener classes that are notified about various events.
-
Support for migrating running process instances to a new version of their process definition.
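The following sketch illustrates the transactional behavior described in the list above. It assumes a persistent KIE session (ksession) configured with a JTA transaction manager, and that a javax.transaction.UserTransaction is available through javax.naming.InitialContext; the JNDI name and process ID are placeholders.
// The JNDI name of the UserTransaction can differ between environments
UserTransaction ut = InitialContext.doLookup("java:comp/UserTransaction");
ut.begin();
try {
    // Every process engine invocation inside this block is part of one JTA transaction
    ksession.startProcess("com.sample.MyProcess");
    ut.commit();
} catch (Exception e) {
    // On failure, the process instance context is restored to its state before the transaction started
    ut.rollback();
    throw e;
}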
The process engine can also be integrated with other independent core services:
-
The human task service can manage user tasks when human actors need to participate in the process. It is fully pluggable and the default implementation is based on the WS-HumanTask specification. The human task service manages the lifecycle of the tasks, task lists, task forms, and some more advanced features like escalation, delegation, and rule-based assignments.
-
The history log can store all information about the execution of all the processes in the process engine. While runtime persistence stores the current state of all active process instances, you need the history log to ensure access to historic information. The history log contains all current and historic states of all active and completed process instances. You can use the log to query for any information related to the execution of process instances for monitoring and analysis.
Core engine API for the process engine
The process engine executes business processes. To define the processes, you create business assets, including process definitions and custom tasks.
You can use the Core Engine API to load, execute, and manage processes in the process engine.
Several levels of control are available:
-
At the lowest level, you can directly create a KIE base and a KIE session. A KIE base represents all the assets in a business process. A KIE session is an entity in the process engine that runs instances of a business process. This level provides fine-grained control, but requires explicit declaration and configuration of process instances, task handlers, event handlers, and other process engine entities in your code.
-
You can use the RuntimeManager class to manage sessions and processes. This class provides sessions for required process instances using a configurable strategy. It automatically configures the interaction between the KIE session and task services. It disposes of process engine entities that are no longer necessary, ensuring optimal use of resources. You can use a fluent API to instantiate RuntimeManager with the necessary business assets and to configure its environment.
-
You can use the Services API to manage the execution of processes. For example, the deployment service deploys business assets into the engine, forming a deployment unit. The process service runs a process from this deployment unit.
If you want to embed the process engine in your application, the Services API is the most convenient option, because it hides the internal details of configuring and managing the engine.
-
Finally, you can deploy a KIE Server that loads business assets from KJAR files and runs processes. KIE Server provides a REST API for loading and managing the processes. You can also use Business Central to manage a KIE Server.
If you use KIE Server, you do not need to use the Core Engine API. For information about deploying and managing processes on a KIE Server, see Packaging and deploying an IBM Business Automation Manager Open Editions project.
For the full reference information for all public process engine API calls, see the Java documentation. Other API classes also exist in the code, but they are internal APIs that can be changed in later versions. Use public APIs in applications that you develop and maintain.
KIE base and KIE session
A KIE base contains a reference to all process definitions and other assets relevant for a process. The engine uses this KIE base to look up all information for the process, or for several processes, whenever necessary.
You can load assets into a KIE base from various sources, such as a class path, file system, or process repository. Creating a KIE base is a resource-heavy operation, as it involves loading and parsing assets from various sources. You can dynamically modify the KIE base to add or remove process definitions and other assets at run time.
After you create a KIE base, you can instantiate a KIE session based on this KIE base. Use this KIE session to run processes based on the definitions in the KIE base.
When you use the KIE session to start a process, a new process instance is created. This instance maintains a specific process state. Different instances in the same KIE session can use the same process definition but have different states.
For example, if you develop an application to process sales orders, you can create one or more process definitions that determine how an order should be processed. When starting the application, you first need to create a KIE base that contains those process definitions. You can then create a session based on this KIE base. When a new sales order comes in, start a new process instance for the order. This process instance contains the state of the process for the specific sales request.
You can create many KIE sessions for the same KIE base and you can create many instances of the process within the same KIE session. Creating a KIE session, and also creating a process instance within the KIE session, uses far fewer resources than creating a KIE base. If you modify a KIE base, all the KIE sessions that use it can use the modifications automatically.
In most simple use cases, you can use a single KIE session to execute all processes. You can also use several sessions if needed. For example, if you want order processing for different customers to be completely independent, you can create a KIE session for each customer. You can also use multiple sessions for scalability reasons.
In typical applications you do not need to create a KIE base or KIE session directly. However, when you use other levels of the process engine API, you can interact with elements of the API that this level defines.
KIE base
The KIE base includes all process definitions and other assets that your application might need to execute a business process.
To create a KIE base, use a KieHelper instance to load processes from various resources, such as the class path or the file system, and to create a new KIE base.
The following code snippet shows how to create a KIE base consisting of only one process definition, which is loaded from the class path.
KieHelper kieHelper = new KieHelper();
KieBase kieBase = kieHelper
.addResource(ResourceFactory.newClassPathResource("MyProcess.bpmn"))
.build();
The ResourceFactory class has similar methods to load resources from a file, a URL, an InputStream, a Reader, and other sources.
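For example, the following sketch loads the same definition from the file system instead of the class path; the file path is a placeholder.
KieHelper kieHelper = new KieHelper();
KieBase kieBase = kieHelper
  .addResource(ResourceFactory.newFileResource("/opt/processes/MyProcess.bpmn"))
  .build();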
Note
|
This "manual" process of creating a KIE base is simpler than other alternatives, but can make an application hard to maintain. Use other methods of creating a KIE base, such as the |
KIE session
After creating and loading the KIE base, you can create a KIE session to interact with the process engine. You can use this session to start and manage processes and to signal events.
The following code snippet creates a session based on the KIE base that you created previously and then starts a process instance, referencing the ID in the process definition.
KieSession ksession = kieBase.newKieSession();
ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess");
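If the process defines variables, you can set their initial values when you start the process instance. The following sketch assumes the process declares an employee variable; the variable name and value are placeholders.
Map<String, Object> params = new HashMap<>();
params.put("employee", "jdoe");
ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess", params);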
ProcessRuntime interface
The KieSession
class exposes the ProcessRuntime
interface, which defines all the session methods for interacting with processes, as the following definition shows.
ProcessRuntime interface
/**
* Start a new process instance. Use the process (definition) that
* is referenced by the given process ID.
*
* @param processId The ID of the process to start
* @return the ProcessInstance that represents the instance of the process that was started
*/
ProcessInstance startProcess(String processId);
/**
* Start a new process instance. Use the process (definition) that
* is referenced by the given process ID. You can pass parameters
* to the process instance as name-value pairs, and these parameters set
* variables of the process instance.
*
* @param processId the ID of the process to start
* @param parameters the process variables to set when starting the process instance
* @return the ProcessInstance that represents the instance of the process that was started
*/
ProcessInstance startProcess(String processId,
Map<String, Object> parameters);
/**
* Signals the process engine that an event has occurred. The type parameter defines
* the type of event and the event parameter can contain additional information
* related to the event. All process instances that are listening to this type
* of (external) event will be notified. For performance reasons, use this type of
* event signaling only if one process instance must be able to notify
* other process instances. For internal events within one process instance, use the
* signalEvent method that also includes the processInstanceId of the process instance
* in question.
*
* @param type the type of event
* @param event the data associated with this event
*/
void signalEvent(String type,
Object event);
/**
* Signals the process instance that an event has occurred. The type parameter defines
* the type of event and the event parameter can contain additional information
* related to the event. All node instances inside the given process instance that
* are listening to this type of (internal) event will be notified. Note that the event
* will only be processed inside the given process instance. All other process instances
* waiting for this type of event will not be notified.
*
* @param type the type of event
* @param event the data associated with this event
* @param processInstanceId the id of the process instance that should be signaled
*/
void signalEvent(String type,
Object event,
long processInstanceId);
/**
* Returns a collection of currently active process instances. Note that only process
* instances that are currently loaded and active inside the process engine are returned.
* When using persistence, it is likely not all running process instances are loaded
* as their state is stored persistently. It is best practice not to use this
* method to collect information about the state of your process instances but to use
* a history log for that purpose.
*
* @return a collection of process instances currently active in the session
*/
Collection<ProcessInstance> getProcessInstances();
/**
* Returns the process instance with the given ID. Note that only active process instances
* are returned. If a process instance has been completed already, this method returns
* null.
*
* @param processInstanceId the ID of the process instance
* @return the process instance with the given ID, or null if it cannot be found
*/
ProcessInstance getProcessInstance(long processInstanceId);
/**
* Aborts the process instance with the given ID. If the process instance has been completed
* (or aborted), or if the process instance cannot be found, this method will throw an
* IllegalArgumentException.
*
* @param processInstanceId the ID of the process instance
*/
void abortProcessInstance(long processInstanceId);
/**
* Returns the WorkItemManager related to this session. This object can be used to
* register new WorkItemHandlers or to complete (or abort) WorkItems.
*
* @return the WorkItemManager related to this session
*/
WorkItemManager getWorkItemManager();
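As a sketch of how these methods are typically combined, the following fragment registers a work item handler and then signals an event to a specific process instance. The task name, event type, and handler class are illustrative placeholders.
// Register a handler for a custom task before starting processes that use it
ksession.getWorkItemManager().registerWorkItemHandler("Notification", new NotificationWorkItemHandler());

// Start a process and signal an event to that specific process instance
ProcessInstance instance = ksession.startProcess("com.sample.MyProcess");
ksession.signalEvent("paymentReceived", null, instance.getId());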
Correlation keys
When working with processes, you might need to assign a business identifier to a process instance and then use the identifier to reference the instance without storing the generated instance ID.
To provide such capabilities, the process engine uses the CorrelationKey
interface, which can define CorrelationProperties
. A class that implements CorrelationKey
can have either a single property describing it or a multi-property set. The value of the property or a combination of values of several properties refers to a unique instance.
The KieSession
class implements the CorrelationAwareProcessRuntime
interface to support correlation capabilities. This interface exposes the following methods:
CorrelationAwareProcessRuntime interface
/**
* Start a new process instance. Use the process (definition) that
* is referenced by the given process ID. You can pass parameters
* to the process instance (as name-value pairs), and these parameters set
* variables of the process instance.
*
* @param processId the ID of the process to start
* @param correlationKey custom correlation key that can be used to identify the process instance
* @param parameters the process variables to set when starting the process instance
* @return the ProcessInstance that represents the instance of the process that was started
*/
ProcessInstance startProcess(String processId, CorrelationKey correlationKey, Map<String, Object> parameters);
/**
* Create a new process instance (but do not yet start it). Use the process
* (definition) that is referenced by the given process ID.
* You can pass parameters to the process instance (as name-value pairs),
* and these parameters set variables of the process instance.
* Use this method if you need a reference to the process instance before actually
* starting it. Otherwise, use startProcess.
*
* @param processId the ID of the process to start
* @param correlationKey custom correlation key that can be used to identify the process instance
* @param parameters the process variables to set when creating the process instance
* @return the ProcessInstance that represents the instance of the process that was created (but not yet started)
*/
ProcessInstance createProcessInstance(String processId, CorrelationKey correlationKey, Map<String, Object> parameters);
/**
* Returns the process instance with the given correlationKey. Note that only active process instances
* are returned. If a process instance has been completed already, this method will return
* null.
*
* @param correlationKey the custom correlation key assigned when the process instance was created
* @return the process instance identified by the key or null if it cannot be found
*/
ProcessInstance getProcessInstance(CorrelationKey correlationKey);
Correlation is usually used with long-running processes. You must enable persistence if you want to store correlation information permanently.
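The following sketch shows one way to create and use a correlation key. It assumes the CorrelationKeyFactory obtained from org.kie.internal.KieInternalServices in the kie-internal module; the business key value and process ID are placeholders.
CorrelationKeyFactory keyFactory = KieInternalServices.Factory.get().newCorrelationKeyFactory();

// Use a business identifier, such as an order number, as the correlation key
CorrelationKey key = keyFactory.newCorrelationKey("ORDER-12345");

ProcessInstance instance =
    ((CorrelationAwareProcessRuntime) ksession).startProcess("com.sample.MyProcess", key, null);

// Later, look up the same process instance by its business key
ProcessInstance sameInstance =
    ((CorrelationAwareProcessRuntime) ksession).getProcessInstance(key);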
Runtime manager
The RuntimeManager
class provides a layer in the process engine API that simplifies and empowers its usage. This class encapsulates and manages the KIE base and KIE session, as well as the task service that provides handlers for all tasks in the process. The KIE session and the task service within the runtime manager are already configured to work with each other and you do not need to provide such configuration. For example, you do not need to register a human task handler and to ensure that it is connected to the required service.
The runtime manager manages the KIE session according to a predefined strategy. The following strategies are available:
-
Singleton: The runtime manager maintains a single
KieSession
and uses it for all the requested processes. -
Per Request: The runtime manager creates a new
KieSession
for every request. -
Per Process Instance: The runtime manager maintains a mapping between process instances and
KieSession
and always provides the same KieSession
whenever working with a given process instance.
Regardless of the strategy, the RuntimeManager
class ensures the same capabilities in initialization and configuration of the process engine components:
-
KieSession
instances are loaded with the same factories (either in memory or JPA based). -
Work item handlers are registered on every
KieSession
instance (either loaded from the database or newly created). -
Event listeners (
Process
,Agenda
,WorkingMemory
) are registered on every KIE session, whether the session is loaded from the database or newly created. -
The task service is configured with the following required components:
-
The JTA transaction manager
-
The same entity manager factory as the one used for
KieSession
instances -
The
UserGroupCallback
instance that can be configured in the environment
-
The runtime manager also enables disposing the process engine cleanly. It provides dedicated methods to dispose a RuntimeEngine
instance when it is no longer needed, releasing any resources it might have acquired.
The following code shows the definition of the RuntimeManager
interface:
RuntimeManager interface
public interface RuntimeManager {
/**
* Returns a <code>RuntimeEngine</code> instance that is fully initialized:
* <ul>
* <li>KieSession is created or loaded depending on the strategy</li>
* <li>TaskService is initialized and attached to the KIE session (through a listener)</li>
* <li>WorkItemHandlers are initialized and registered on the KIE session</li>
* <li>EventListeners (process, agenda, working memory) are initialized and added to the KIE session</li>
* </ul>
* @param context the concrete implementation of the context that is supported by given <code>RuntimeManager</code>
* @return instance of the <code>RuntimeEngine</code>
*/
RuntimeEngine getRuntimeEngine(Context<?> context);
/**
* Unique identifier of the <code>RuntimeManager</code>
* @return
*/
String getIdentifier();
/**
* Disposes <code>RuntimeEngine</code> and notifies all listeners about that fact.
* This method should always be used to dispose <code>RuntimeEngine</code> that is not needed
* anymore. <br/>
* Do not use KieSession.dispose() with RuntimeManager as it will break the internal
* mechanisms of the manager responsible for clear and efficient disposal.<br/>
* Disposing is not needed if <code>RuntimeEngine</code> was obtained within an active JTA transaction:
* if the getRuntimeEngine method was invoked during an active JTA transaction, then disposing of
* the runtime engine will happen automatically on transaction completion.
* @param runtime
*/
void disposeRuntimeEngine(RuntimeEngine runtime);
/**
* Closes <code>RuntimeManager</code> and releases its resources. Call this method when
* a runtime manager is not needed anymore. Otherwise it will still be active and operational.
*/
void close();
}
The RuntimeManager
class also provides the RuntimeEngine
class, which includes methods to get access to underlying process engine components:
RuntimeEngine interface
public interface RuntimeEngine {
/**
* Returns the <code>KieSession</code> configured for this <code>RuntimeEngine</code>
* @return
*/
KieSession getKieSession();
/**
* Returns the <code>TaskService</code> configured for this <code>RuntimeEngine</code>
* @return
*/
TaskService getTaskService();
}
Note
|
An identifier of the RuntimeManager instance is used as the deployment identifier during runtime execution. The same identifier is also persisted in the history log. If you don’t specify an identifier when creating a RuntimeManager instance, a default value is applied, depending on the strategy. If you maintain multiple runtime managers in your application, you must specify a unique identifier for every RuntimeManager instance. For example, the deployment service maintains multiple runtime managers and uses the GAV value of the KJAR file as an identifier. The same logic is used in Business Central and in KIE Server, because they depend on the deployment service. |
Note
|
When you need to interact with the process engine or task service from within a handler or a listener, you can use the RuntimeEngine instance that the process engine provides to the handler or listener, and retrieve the KieSession or TaskService from it. |
Runtime manager strategies
The RuntimeManager
class supports the following strategies for managing KIE sessions.
- Singleton strategy
-
This strategy instructs the runtime manager to maintain a single
RuntimeEngine
instance (and in turn singleKieSession
andTaskService
instances). Access to the runtime engine is synchronized and, therefore, thread safe, although it comes with a performance penalty due to synchronization. Use this strategy for simple use cases.
This strategy has the following characteristics:
-
It has a small memory footprint, with single instances of the runtime engine and the task service.
-
It is simple and compact in design and usage.
-
It is a good fit for low-to-medium load on the process engine because of synchronized access.
-
In this strategy, because of the single
KieSession
instance, all state objects (such as facts) are directly visible to all process instances and vice versa. -
The strategy is not contextual. When you retrieve instances of
RuntimeEngine
from a singletonRuntimeManager
, you do not need to take theContext
instance into account. Usually, you can useEmptyContext.get()
as the context, although a null argument is acceptable as well. -
In this strategy, the runtime manager keeps track of the ID of the
KieSession
, so that the same session remains in use after aRuntimeManager
restart. The ID is stored as a serialized file in a temporary location in the file system that, depending on the environment, can be one of the following directories:-
The value of the
jbpm.data.dir
system property -
The value of the
jboss.server.data.dir
system property -
The value of the
java.io.tmpdir
system property
-
Warning: A combination of the Singleton strategy and the EJB Timer Scheduler might raise Hibernate issues under load. Do not use this combination in production applications. The EJB Timer Scheduler is the default scheduler in KIE Server.
-
- Per request strategy
-
This strategy instructs the runtime manager to provide a new instance of
RuntimeEngine
for every request. One or more invocations of the process engine within a single transaction are considered a single request. The same instance of
RuntimeEngine
must be used within a single transaction to ensure correctness of state. Otherwise, an operation completed in one call would not be visible in the next call. This strategy is stateless, as process state is preserved only within the request. When a request is completed, the
RuntimeEngine
instance is permanently destroyed. If persistence is used, information related to the KIE session is removed from the persistence database as well. This strategy has the following characteristics:
-
It provides completely isolated process engine and task service operations for every request.
-
It is completely stateless, because facts are stored only for the duration of the request.
-
It is a good fit for high-load, stateless processes, where no facts or timers must be preserved between requests.
-
In this strategy, the KIE session is only available during the life of a request and is destroyed at the end of the request.
-
The strategy is not contextual. When you retrieve instances of
RuntimeEngine
from a per-requestRuntimeManager
, you do not need to take theContext
instance into account. Usually, you can useEmptyContext.get()
as the context, although a null argument is acceptable as well.
-
- Per process instance strategy
-
This strategy instructs
RuntimeManager
to maintain a strict relationship between a KIE session and a process instance. EachKieSession
is available as long as theProcessInstance
to which it belongs is active. This strategy provides the most flexible approach for using advanced capabilities of the process engine, such as rule evaluation and isolation between process instances. It maximizes performance and reduces potential bottlenecks introduced by synchronization. At the same time, unlike the per-request strategy, it reduces the number of KIE sessions to the actual number of process instances, rather than the total number of requests.
This strategy has the following characteristics:
-
It provides isolation for every process instance.
-
It maintains a strict relationship between
KieSession
andProcessInstance
to ensure that it always delivers the sameKieSession
for a givenProcessInstance
. -
It merges the lifecycle of
KieSession
withProcessInstance
, and both are disposed when the process instance completes or aborts. -
It enables maintenance of data, such as facts and timers, in the scope of the process instance. Only the process instance has access to the data.
-
It introduces some overhead because of the need to look up and load the
KieSession
for the process instance. -
It validates every usage of a
KieSession
so it cannot be used for other process instances. An exception is thrown if another process instance uses the sameKieSession
. -
The strategy is contextual and accepts the following context instances (an example follows this list):
-
EmptyContext
or null: Used when starting a process instance because no process instance ID is available yet -
ProcessInstanceIdContext
: Used after the process instance is created -
CorrelationKeyContext
: Used as an alternative to ProcessInstanceIdContext
to use a custom (business) key instead of the process instance ID
-
-
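As a sketch, working with the per-process-instance strategy might look like the following fragment. It assumes a RuntimeManager created with the per-process-instance strategy (manager) and the context classes from org.kie.internal.runtime.manager.context; the process ID and event type are placeholders.
// Start a new process instance: no process instance ID exists yet, so use EmptyContext
RuntimeEngine startEngine = manager.getRuntimeEngine(EmptyContext.get());
long processInstanceId = startEngine.getKieSession().startProcess("com.sample.MyProcess").getId();
manager.disposeRuntimeEngine(startEngine);

// Work with the existing instance: ProcessInstanceIdContext makes the runtime manager
// return the KieSession that is mapped to this process instance
RuntimeEngine engine = manager.getRuntimeEngine(ProcessInstanceIdContext.get(processInstanceId));
engine.getKieSession().signalEvent("paymentReceived", null, processInstanceId);
manager.disposeRuntimeEngine(engine);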
Typical usage scenario for the runtime manager
The typical usage scenario for the runtime manager consists of the following stages:
-
At application startup time, complete the following stage:
-
Build a
RuntimeManager
instance and keep it for the entire lifetime of the application, as it is thread-safe and can be accessed concurrently.
-
-
At request time, complete the following stages:
-
Get
RuntimeEngine
from theRuntimeManager
, using the proper context instance as determined by the strategy that you configured for theRuntimeManager
class. -
Get the
KieSession
andTaskService
objects from theRuntimeEngine
. -
Use the
KieSession
andTaskService
objects for operations such asstartProcess
orcompleteTask
. -
After completing processing, dispose
RuntimeEngine
using theRuntimeManager.disposeRuntimeEngine
method.
-
-
At application shutdown time, complete the following stage:
-
Close the
RuntimeManager
instance.
-
Note
|
When |
The following example shows how you can build a RuntimeManager
instance and get a RuntimeEngine
instance (that encapsulates KieSession
and TaskService
classes) from it:
Building a RuntimeManager instance and then getting RuntimeEngine and KieSession
// First, configure the environment to be used by RuntimeManager
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
.newDefaultInMemoryBuilder()
.addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"), ResourceType.BPMN2)
.get();
// Next, create the RuntimeManager - in this case the singleton strategy is chosen
RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);
// Then get RuntimeEngine from the runtime manager, using an empty context because singleton does not keep track
// of runtime engine as there is only one
RuntimeEngine runtimeEngine = manager.getRuntimeEngine(EmptyContext.get());
// Get the KieSession from the RuntimeEngine - already initialized with all handlers, listeners, and other requirements
// configured on the environment
KieSession ksession = runtimeEngine.getKieSession();
// Add invocations of the process engine here,
// for example, ksession.startProcess(processId);
// Finally, dispose the runtime engine
manager.disposeRuntimeEngine(runtimeEngine);
This example provides the simplest, or minimal, way of using RuntimeManager
and RuntimeEngine
classes. It has the following characteristics (a persistent variant is sketched after this list):
-
The
KieSession
instance is created in memory, using the newDefaultInMemoryBuilder
builder. -
A single process, which is added as an asset, is available for execution.
-
The
TaskService
class is configured and attached to the KieSession
instance through the LocalHTWorkItemHandler
interface to support user task capabilities within processes.
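By contrast, a persistent setup that uses the per-process-instance strategy might look like the following sketch. It assumes a JPA persistence unit named org.jbpm.persistence.jpa is defined in persistence.xml (EntityManagerFactory and Persistence come from javax.persistence); the process file name and manager identifier are placeholders.
EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
    .newDefaultBuilder()
    .entityManagerFactory(emf)
    .addAsset(ResourceFactory.newClassPathResource("BPMN2-UserTask.bpmn2"), ResourceType.BPMN2)
    .get();

// Per-process-instance strategy with an explicit identifier for this runtime manager
RuntimeManager manager =
    RuntimeManagerFactory.Factory.get().newPerProcessInstanceRuntimeManager(environment, "my-per-instance-manager");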
Runtime environment configuration object
The RuntimeManager
class encapsulates internal process engine complexity, such as creating, disposing, and registering handlers.
It also provides fine-grained control over process engine configuration. To set this configuration, you must create a RuntimeEnvironment
object and then use it to create the RuntimeManager
object.
The following definition shows the methods available in the RuntimeEnvironment
interface:
RuntimeEnvironment interface
public interface RuntimeEnvironment {
/**
* Returns <code>KieBase</code> that is to be used by the manager
* @return
*/
KieBase getKieBase();
/**
* KieSession environment that is to be used to create instances of <code>KieSession</code>
* @return
*/
Environment getEnvironment();
/**
* KieSession configuration that is to be used to create instances of <code>KieSession</code>
* @return
*/
KieSessionConfiguration getConfiguration();
/**
* Indicates if persistence is to be used for the KieSession instances
* @return
*/
boolean usePersistence();
/**
* Delivers a concrete implementation of <code>RegisterableItemsFactory</code> to obtain handlers and listeners
* that is to be registered on instances of <code>KieSession</code>
* @return
*/
RegisterableItemsFactory getRegisterableItemsFactory();
/**
* Delivers a concrete implementation of <code>UserGroupCallback</code> that is to be registered on instances
* of <code>TaskService</code> for managing users and groups.
* @return
*/
UserGroupCallback getUserGroupCallback();
/**
* Delivers a custom class loader that is to be used by the process engine and task service instances
* @return
*/
ClassLoader getClassLoader();
/**
* Closes the environment, permitting closing of all dependent components such as ksession factories
*/
void close();
Runtime environment builder
To create an instance of RuntimeEnvironment
that contains the required data, use the RuntimeEnvironmentBuilder
class. This class provides a fluent API to configure a RuntimeEnvironment
instance with predefined settings.
The following definition shows the methods in the RuntimeEnvironmentBuilder
interface:
RuntimeEnvironmentBuilder interface
public interface RuntimeEnvironmentBuilder {
public RuntimeEnvironmentBuilder persistence(boolean persistenceEnabled);
public RuntimeEnvironmentBuilder entityManagerFactory(Object emf);
public RuntimeEnvironmentBuilder addAsset(Resource asset, ResourceType type);
public RuntimeEnvironmentBuilder addEnvironmentEntry(String name, Object value);
public RuntimeEnvironmentBuilder addConfiguration(String name, String value);
public RuntimeEnvironmentBuilder knowledgeBase(KieBase kbase);
public RuntimeEnvironmentBuilder userGroupCallback(UserGroupCallback callback);
public RuntimeEnvironmentBuilder registerableItemsFactory(RegisterableItemsFactory factory);
public RuntimeEnvironment get();
public RuntimeEnvironmentBuilder classLoader(ClassLoader cl);
public RuntimeEnvironmentBuilder schedulerService(Object globalScheduler);
Use the RuntimeEnvironmentBuilderFactory
class to obtain instances of RuntimeEnvironmentBuilder
. Along with empty instances with no settings, you can get builders with several preconfigured sets of configuration options for the runtime manager.
The following definition shows the methods in the RuntimeEnvironmentBuilderFactory
interface:
RuntimeEnvironmentBuilderFactory interface
public interface RuntimeEnvironmentBuilderFactory {
/**
* Provides a completely empty <code>RuntimeEnvironmentBuilder</code> instance to manually
* set all required components instead of relying on any defaults.
* @return new instance of <code>RuntimeEnvironmentBuilder</code>
*/
public RuntimeEnvironmentBuilder newEmptyBuilder();
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* <ul>
* <li>DefaultRuntimeEnvironment</li>
* </ul>
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
*
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultBuilder();
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* <ul>
* <li>DefaultRuntimeEnvironment</li>
* </ul>
* but does not have persistence for the process engine configured so it will only store process instances in memory
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
*
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultInMemoryBuilder();
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* <ul>
* <li>DefaultRuntimeEnvironment</li>
* </ul>
* This method is tailored to work smoothly with KJAR files
* @param groupId group id of kjar
* @param artifactId artifact id of kjar
* @param version version number of kjar
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
*
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version);
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* <ul>
* <li>DefaultRuntimeEnvironment</li>
* </ul>
* This method is tailored to work smoothly with KJAR files and use the kbase and ksession settings in the KJAR
* @param groupId group id of kjar
* @param artifactId artifact id of kjar
* @param version version number of kjar
* @param kbaseName name of the kbase defined in kmodule.xml stored in kjar
* @param ksessionName name of the ksession defined in kmodule.xml stored in kjar
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
*
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version, String kbaseName, String ksessionName);
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* <ul>
* <li>DefaultRuntimeEnvironment</li>
* </ul>
* This method is tailored to work smoothly with KJAR files and use the release ID defined in the KJAR
* @param releaseId <code>ReleaseId</code> that described the kjar
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
*
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId);
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* <ul>
* <li>DefaultRuntimeEnvironment</li>
* </ul>
* This method is tailored to work smoothly with KJAR files and use the kbase, ksession, and release ID settings in the KJAR
* @param releaseId <code>ReleaseId</code> that described the kjar
* @param kbaseName name of the kbase defined in kmodule.xml stored in kjar
* @param ksessionName name of the ksession defined in kmodule.xml stored in kjar
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
*
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId, String kbaseName, String ksessionName);
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* <ul>
* <li>DefaultRuntimeEnvironment</li>
* </ul>
* It relies on KieClasspathContainer that requires the presence of kmodule.xml in the META-INF folder which
* defines the kjar itself.
* Expects to use default kbase and ksession from kmodule.
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
*
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder();
/**
* Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
* <ul>
* <li>DefaultRuntimeEnvironment</li>
* </ul>
* It relies on KieClasspathContainer that requires the presence of kmodule.xml in the META-INF folder which
* defines the kjar itself.
* @param kbaseName name of the kbase defined in kmodule.xml
* @param ksessionName name of the ksession defined in kmodule.xml
* @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
*
* @see DefaultRuntimeEnvironment
*/
public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder(String kbaseName, String ksessionName);
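For example, a sketch of obtaining a preconfigured builder through this factory and building an environment from a KJAR; the GAV values are placeholders.
RuntimeEnvironmentBuilderFactory factory = RuntimeEnvironmentBuilder.Factory.get();

// Build an environment from a KJAR, using the default kbase and ksession defined in its kmodule.xml
RuntimeEnvironment environment = factory
    .newDefaultBuilder("com.example", "my-kjar", "1.0.0")
    .get();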
The runtime manager also provides access to a TaskService
object as an integrated component of a RuntimeEngine
object, configured to communicate with the KIE session. If you use one of the default builders, the following configuration settings for the task service are present:
-
The persistence unit name is set to
org.jbpm.persistence.jpa
(for both process engine and task service). -
The human task handler is registered on the KIE session.
-
The JPA-based history log event listener is registered on the KIE session.
-
An event listener to trigger rule task evaluation (
fireAllRules
) is registered on the KIE session.
Registration of handlers and listeners for runtime engines
If you use the runtime manager API, the runtime engine object represents the process engine.
To extend runtime engines with your own handlers or listeners, you can implement the RegisterableItemsFactory
interface and then include it in the runtime environment using the RuntimeEnvironmentBuilder.registerableItemsFactory()
method. Then the runtime manager automatically adds the handlers or listeners to every runtime engine it creates.
The following definition shows the methods in the RegisterableItemsFactory
interface:
RegisterableItemsFactory interface
/**
* Returns new instances of <code>WorkItemHandler</code> that will be registered on <code>RuntimeEngine</code>
* @param runtime provides <code>RuntimeEngine</code> in case handler need to make use of it internally
* @return map of handlers to be registered - in case of no handlers empty map shall be returned.
*/
Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime);
/**
* Returns new instances of <code>ProcessEventListener</code> that will be registered on <code>RuntimeEngine</code>
* @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
* @return list of listeners to be registered - in case of no listeners empty list shall be returned.
*/
List<ProcessEventListener> getProcessEventListeners(RuntimeEngine runtime);
/**
* Returns new instances of <code>AgendaEventListener</code> that will be registered on <code>RuntimeEngine</code>
* @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
* @return list of listeners to be registered - in case of no listeners empty list shall be returned.
*/
List<AgendaEventListener> getAgendaEventListeners(RuntimeEngine runtime);
/**
* Returns new instances of <code>WorkingMemoryEventListener</code> that will be registered on <code>RuntimeEngine</code>
* @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
* @return list of listeners to be registered - in case of no listeners empty list shall be returned.
*/
List<WorkingMemoryEventListener> getWorkingMemoryEventListeners(RuntimeEngine runtime);
The process engine provides default implementations of RegisterableItemsFactory
. You can extend these implementations to define custom handlers and listeners, as shown in the sketch after the following list.
The following available implementations might be useful:
-
org.jbpm.runtime.manager.impl.SimpleRegisterableItemsFactory
: The simplest possible implementation. It does not have any predefined content and uses reflection to produce instances of handlers and listeners based on given class names. -
org.jbpm.runtime.manager.impl.DefaultRegisterableItemsFactory
: An extension of the Simple implementation that introduces the same defaults as the default runtime environment builder and still provides the same capabilities as the Simple implementation. -
org.jbpm.runtime.manager.impl.cdi.InjectableRegisterableItemsFactory
: An extension of the Default implementation that is tailored for CDI environments and provides a CDI style approach to finding handlers and listeners using producers.
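For example, a sketch that extends the Default implementation to add one custom handler while keeping the preconfigured defaults; the handler class and task name are hypothetical.
import java.util.Map;

import org.jbpm.runtime.manager.impl.DefaultRegisterableItemsFactory;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.process.WorkItemHandler;

public class CustomRegisterableItemsFactory extends DefaultRegisterableItemsFactory {

    @Override
    public Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime) {
        // Keep the default handlers (for example, the human task handler) and add a custom one
        Map<String, WorkItemHandler> handlers = super.getWorkItemHandlers(runtime);
        handlers.put("Notification", new NotificationWorkItemHandler());
        return handlers;
    }
}
You can then pass an instance of this class to the RuntimeEnvironmentBuilder.registerableItemsFactory() method when you build the runtime environment.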
Registering work item handlers using a file
You can register simple work item handlers, which are stateless or rely on the KieSession
state, by defining them in the CustomWorkItem.conf
file and placing the file on the class path.
-
Create a file named
drools.session.conf
in the META-INF
subdirectory of the root of the class path. For web applications the directory is WEB-INF/classes/META-INF
. -
Add the following line to the
drools.session.conf
file:
drools.workItemHandlers = CustomWorkItemHandlers.conf
-
Create a file named
CustomWorkItemHandlers.conf
in the same directory. -
In the
CustomWorkItemHandlers.conf
file, define custom work item handlers using the MVEL style, similar to the following example:
[
  "Log": new org.jbpm.process.instance.impl.demo.SystemOutWorkItemHandler(),
  "WebService": new org.jbpm.process.workitem.webservice.WebServiceWorkItemHandler(ksession),
  "Rest": new org.jbpm.process.workitem.rest.RESTWorkItemHandler(),
  "Service Task" : new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession)
]
The work item handlers that you listed are registered for any KIE session created by the application, regardless of whether the application uses the runtime manager API.
Registration of handlers and listeners in a CDI environment
If your application uses the runtime manager API and runs in a CDI environment, your classes can implement the dedicated producer interfaces to provide custom work item handlers and event listeners to all runtime engines.
To create a work item handler, you must implement the WorkItemHandlerProducer
interface.
WorkItemHandlerProducer interface
public interface WorkItemHandlerProducer {
/**
* Returns a map of work items (key = work item name, value= work item handler instance)
* to be registered on the KieSession
* <br/>
* The following parameters are accepted:
* <ul>
* <li>ksession</li>
* <li>taskService</li>
* <li>runtimeManager</li>
* </ul>
*
* @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out
* and provide valid instances for given owner
* @param params - the owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances
* @return map of work item handler instances (recommendation is to always return new instances when this method is invoked)
*/
Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params);
}
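A minimal sketch of such a producer follows. It assumes the WorkItemHandlerProducer interface shown above (from the kie-internal module) and a hypothetical notification handler; the map key must match the task name used in the process definitions.
import java.util.HashMap;
import java.util.Map;

import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.internal.runtime.manager.WorkItemHandlerProducer;

public class NotificationWorkItemHandlerProducer implements WorkItemHandlerProducer {

    @Override
    public Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params) {
        // Return new handler instances; this method is called for every runtime engine that is created
        Map<String, WorkItemHandler> handlers = new HashMap<>();
        handlers.put("Notification", new NotificationWorkItemHandler());
        return handlers;
    }
}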
To create an event listener, you must implement the EventListenerProducer
interface. Annotate the event listener producer with the proper qualifier to indicate the type of listeners that it provides. Use one of the following annotations:
-
@Process
forProcessEventListener
-
@Agenda
forAgendaEventListener
-
@WorkingMemory
forWorkingMemoryEventListener
EventListenerProducer interface
public interface EventListenerProducer<T> {
/**
* Returns a list of instances for given (T) type of listeners
* <br/>
* The following parameters are accepted:
* <ul>
* <li>ksession</li>
* <li>taskService</li>
* <li>runtimeManager</li>
* </ul>
* @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out
* and provide valid instances for given owner
* @param params - the owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances
* @return list of listener instances (recommendation is to always return new instances when this method is invoked)
*/
List<T> getEventListeners(String identifier, Map<String, Object> params);
}
Package your implementations of these interfaces as a bean archive by including beans.xml
in the META-INF
subdirectory. Place the bean archive on the application class path, for example, in WEB-INF/lib
for a web application. The CDI-based runtime manager discovers the packages and registers the work item handlers and event listeners in every KieSession
that it creates or loads from the data store.
The process engine provides certain parameters to the producers to enable stateful and advanced operation. For example, the handlers or listeners can use the parameters to signal the process engine or the process instance in case of an error. The process engine provides the following components as parameters:
-
KieSession
-
TaskService
-
RuntimeManager
In addition, the identifier of the RuntimeManager
class instance is provided as a parameter. You can apply filtering to the identifier to decide whether this RuntimeManager
instance receives the handlers and listeners.
Services in the process engine
The process engine provides a set of high-level services, running on top of the runtime manager API.
The services provide the most convenient way to embed the process engine in your application. KIE Server also uses these services internally.
When you use services, you do not need to implement your own handling of the runtime manager, runtime engines, sessions, and other process engine entities. However, you can access the underlying RuntimeManager
objects through the services when necessary.
Note
|
If you use the EJB remote client for the services API, the |
Modules for process engine services
The process engine services are provided as a set of modules. These modules are grouped by their framework dependencies. You can choose the suitable modules and use only these modules, without making your application dependent on the frameworks that other modules use.
The following modules are available:
-
jbpm-services-api
: Only API classes and interfaces -
jbpm-kie-services
: A code implementation of the services API in pure Java without any framework dependencies -
jbpm-services-cdi
: A CDI wrapper on top of the core services implementation -
jbpm-services-ejb-api
: An extension of the services API to support EJB requirements -
jbpm-services-ejb-impl
: EJB wrappers on top of the core services implementation -
jbpm-services-ejb-timer
: A scheduler service based on the EJB timer service to support time-based operations, such as timer events and deadlines -
jbpm-services-ejb-client
: An EJB remote client implementation, currently supporting only Red Hat JBoss EAP
Deployment service
The deployment service deploys and undeploys units in the process engine.
A deployment unit represents the contents of a KJAR file. A deployment unit includes business assets, such as process definitions, rules, forms, and data models. After deploying the unit you can execute the processes it defines. You can also query the available deployment units.
Every deployment unit has a unique identifier string, deploymentId
, also known as deploymentUnitId
. You can use this identifier to apply any service actions to the deployment unit.
In a typical use case for this service, you can load and unload multiple KJARs at the same time and, when necessary, execute processes simultaneously.
The following code sample shows simple use of the deployment service.
// Create deployment unit by providing the GAV of the KJAR
DeploymentUnit deploymentUnit = new KModuleDeploymentUnit(GROUP_ID, ARTIFACT_ID, VERSION);
// Get the deploymentId for the deployed unit
String deploymentId = deploymentUnit.getIdentifier();
// Deploy the unit
deploymentService.deploy(deploymentUnit);
// Retrieve the deployed unit
DeployedUnit deployed = deploymentService.getDeployedUnit(deploymentId);
// Get the runtime manager
RuntimeManager manager = deployed.getRuntimeManager();
The following definition shows the complete DeploymentService
interface:
DeploymentService interface
public interface DeploymentService {
void deploy(DeploymentUnit unit);
void undeploy(DeploymentUnit unit);
RuntimeManager getRuntimeManager(String deploymentUnitId);
DeployedUnit getDeployedUnit(String deploymentUnitId);
Collection<DeployedUnit> getDeployedUnits();
void activate(String deploymentId);
void deactivate(String deploymentId);
boolean isDeployed(String deploymentUnitId);
}
Definition service
When you deploy a process definition using the deployment service, the definition service automatically scans the definition, parses the process, and extracts the information that the process engine requires.
You can use the definition service API to retrieve information about the process definition. The service extracts this information directly from the BPMN2 process definition. The following information is available:
-
Process definition such as ID, name, and description
-
Process variables including the name and type of every variable
-
Reusable sub-processes used in the process (if any)
-
Service tasks that represent domain-specific activities
-
User tasks including assignment information
-
Task data with input and output information
The following code sample shows simple use of the definition service. The processId
must correspond to the ID of a process definition in a KJAR file that you already deployed using the deployment service.
String processId = "org.jbpm.writedocument";
Collection<UserTaskDefinition> processTasks =
bpmn2Service.getTasksDefinitions(deploymentUnit.getIdentifier(), processId);
Map<String, String> processData =
bpmn2Service.getProcessVariables(deploymentUnit.getIdentifier(), processId);
Map<String, String> taskInputMappings =
bpmn2Service.getTaskInputMappings(deploymentUnit.getIdentifier(), processId, "Write a Document" );
You can also use the definition service to scan a definition that you provide as BPMN2-compliant XML content, without the use of a KJAR file. The buildProcessDefinition
method provides this capability.
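For example, a sketch that builds a definition from raw BPMN2 XML, using java.nio.file.Files to read the content; the deployment ID and file path are placeholders and error handling is omitted.
// Read BPMN2-compliant XML; the file path is a placeholder
String bpmn2Content = new String(
    Files.readAllBytes(Paths.get("src/main/resources/MyProcess.bpmn2")),
    StandardCharsets.UTF_8);

// Build the definition without caching it and without a custom class loader
ProcessDefinition definition =
    bpmn2Service.buildProcessDefinition("my-deployment", bpmn2Content, null, false);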
The following definition shows the complete DefinitionService
interface:
DefinitionService interface
public interface DefinitionService {
ProcessDefinition buildProcessDefinition(String deploymentId, String bpmn2Content, ClassLoader classLoader, boolean cache) throws IllegalArgumentException;
ProcessDefinition getProcessDefinition(String deploymentId, String processId);
Collection<String> getReusableSubProcesses(String deploymentId, String processId);
Map<String, String> getProcessVariables(String deploymentId, String processId);
Map<String, String> getServiceTasks(String deploymentId, String processId);
Map<String, Collection<String>> getAssociatedEntities(String deploymentId, String processId);
Collection<UserTaskDefinition> getTasksDefinitions(String deploymentId, String processId);
Map<String, String> getTaskInputMappings(String deploymentId, String processId, String taskName);
Map<String, String> getTaskOutputMappings(String deploymentId, String processId, String taskName);
}
Process service
The deployment and definition services prepare process data in the process engine. To execute processes based on this data, use the process service. The process service supports interaction with the process engine execution environment, including the following actions:
-
Starting a new process instance
-
Running a process as a single transaction
-
Working with an existing process instance, for example, signalling events, getting information details, and setting values of variables
-
Working with work items
The process service is also a command executor. You can use it to execute commands on the KIE session to extend its capabilities.
Important
|
The process service is optimized for runtime operations. Use it when you need to run a process or to alter a process instance, for example, signal events or change variables. For read operations, for example, showing available process instances, use the runtime data service. |
The following code sample shows deploying and running a process:
KModuleDeploymentUnit deploymentUnit = new KModuleDeploymentUnit(GROUP_ID, ARTIFACT_ID, VERSION);
deploymentService.deploy(deploymentUnit);
long processInstanceId = processService.startProcess(deploymentUnit.getIdentifier(), "customtask");
ProcessInstance pi = processService.getProcessInstance(processInstanceId);
The startProcess method expects deploymentId as the first argument. Using this argument, you can start processes in a particular deployment when your application has multiple deployments.
For example, you might deploy different versions of the same process from different KJAR files. You can then start the required version by using the correct deploymentId.
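If the process expects input data, you can also pass initial process variables when starting it. The following sketch uses the startProcess overload that accepts a variable map; the approver variable is an assumption for illustration:
// Initial process variables; the variable name "approver" is only an example
Map<String, Object> params = new HashMap<String, Object>();
params.put("approver", "john");

long processInstanceId = processService.startProcess(deploymentUnit.getIdentifier(), "customtask", params);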
The following definition shows the complete ProcessService interface:
ProcessService interface
public interface ProcessService {
/**
* Starts a process with no variables
*
* @param deploymentId deployment identifier
* @param processId process identifier
* @return process instance identifier
* @throws RuntimeException in case of encountered errors
* @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist
* @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active
*/
Long startProcess(String deploymentId, String processId);
/**
* Starts a process and sets variables
*
* @param deploymentId deployment identifier
* @param processId process identifier
* @param params process variables
* @return process instance identifier
* @throws RuntimeException in case of encountered errors
* @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist
* @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active
*/
Long startProcess(String deploymentId, String processId, Map<String, Object> params);
/**
* Starts a process with no variables and assigns a correlation key
*
* @param deploymentId deployment identifier
* @param processId process identifier
* @param correlationKey correlation key to be assigned to the process instance - must be unique
* @return process instance identifier
* @throws RuntimeException in case of encountered errors
* @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist
* @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active
*/
Long startProcess(String deploymentId, String processId, CorrelationKey correlationKey);
/**
* Starts a process, sets variables, and assigns a correlation key
*
* @param deploymentId deployment identifier
* @param processId process identifier
* @param correlationKey correlation key to be assigned to the process instance - must be unique
* @param params process variables
* @return process instance identifier
* @throws RuntimeException in case of encountered errors
* @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist
* @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active
*/
Long startProcess(String deploymentId, String processId, CorrelationKey correlationKey, Map<String, Object> params);
/**
* Run a process that is designed to start and finish in a single transaction.
* This method starts the process and returns when the process completes.
* It returns the state of process variables at the outcome of the process
*
* @param deploymentId deployment identifier for the KJAR file of the process
* @param processId process identifier
* @param params process variables
* @return the state of process variables at the end of the process
*/
Map<String, Object> computeProcessOutcome(String deploymentId, String processId, Map<String, Object> params);
/**
* Starts a process at the listed nodes, instead of the normal starting point.
* This method can be used for restarting a process that was aborted. However,
* it does not restore the context of a previous process instance. You must
* supply all necessary variables when calling this method.
* This method does not guarantee that the process is started in a valid state.
*
* @param deploymentId deployment identifier
* @param processId process identifier
* @param params process variables
* @param nodeIds list of BPMN node identifiers where the process must start
* @return process instance identifier
* @throws RuntimeException in case of encountered errors
* @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist
* @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active
*/
Long startProcessFromNodeIds(String deploymentId, String processId, Map<String, Object> params, String... nodeIds);
/**
* Starts a process at the listed nodes, instead of the normal starting point,
* and assigns a correlation key.
* This method can be used for restarting a process that was aborted. However,
* it does not restore the context of a previous process instance. You must
* supply all necessary variables when calling this method.
* This method does not guarantee that the process is started in a valid state.
*
* @param deploymentId deployment identifier
* @param processId process identifier
* @param key correlation key (must be unique)
* @param params process variables
* @param nodeIds list of BPMN node identifiers where the process must start.
* @return process instance identifier
* @throws RuntimeException in case of encountered errors
* @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist
* @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active
*/
Long startProcessFromNodeIds(String deploymentId, String processId, CorrelationKey key, Map<String, Object> params, String... nodeIds);
/**
* Aborts the specified process
*
* @param processInstanceId process instance unique identifier
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void abortProcessInstance(Long processInstanceId);
/**
* Aborts the specified process
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId process instance unique identifier
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void abortProcessInstance(String deploymentId, Long processInstanceId);
/**
* Aborts all specified processes
*
* @param processInstanceIds list of process instance unique identifiers
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void abortProcessInstances(List<Long> processInstanceIds);
/**
* Aborts all specified processes
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceIds list of process instance unique identifiers
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void abortProcessInstances(String deploymentId, List<Long> processInstanceIds);
/**
* Signals an event to a single process instance
*
* @param processInstanceId the process instance unique identifier
* @param signalName the ID of the signal in the process
* @param event the event object to be passed with the event
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void signalProcessInstance(Long processInstanceId, String signalName, Object event);
/**
* Signals an event to a single process instance
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId the process instance unique identifier
* @param signalName the ID of the signal in the process
* @param event the event object to be passed with the event
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void signalProcessInstance(String deploymentId, Long processInstanceId, String signalName, Object event);
/**
* Signal an event to a list of process instances
*
* @param processInstanceIds list of process instance unique identifiers
* @param signalName the ID of the signal in the process
* @param event the event object to be passed with the event
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void signalProcessInstances(List<Long> processInstanceIds, String signalName, Object event);
/**
* Signal an event to a list of process instances
*
* @param deploymentId deployment to which the process instances belong
* @param processInstanceIds list of process instance unique identifiers
* @param signalName the ID of the signal in the process
* @param event the event object to be passed with the event
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void signalProcessInstances(String deploymentId, List<Long> processInstanceIds, String signalName, Object event);
/**
* Signal an event to a single process instance by correlation key
*
* @param correlationKey the unique correlation key of the process instance
* @param signalName the ID of the signal in the process
* @param event the event object to be passed in with the event
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given key was not found
*/
void signalProcessInstanceByCorrelationKey(CorrelationKey correlationKey, String signalName, Object event);
/**
* Signal an event to a single process instance by correlation key
*
* @param deploymentId deployment to which the process instance belongs
* @param correlationKey the unique correlation key of the process instance
* @param signalName the ID of the signal in the process
* @param event the event object to be passed in with the event
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given key was not found
*/
void signalProcessInstanceByCorrelationKey(String deploymentId, CorrelationKey correlationKey, String signalName, Object event);
/**
* Signal an event to given list of correlation keys
*
* @param correlationKeys list of unique correlation keys of process instances
* @param signalName the ID of the signal in the process
* @param event the event object to be passed in with the event
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with one of the given keys was not found
*/
void signalProcessInstancesByCorrelationKeys(List<CorrelationKey> correlationKeys, String signalName, Object event);
/**
* Signal an event to given list of correlation keys
*
* @param deploymentId deployment to which the process instances belong
* @param correlationKeys list of unique correlation keys of process instances
* @param signalName the ID of the signal in the process
* @param event the event object to be passed in with the event
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with one of the given keys was not found
*/
void signalProcessInstancesByCorrelationKeys(String deploymentId, List<CorrelationKey> correlationKeys, String signalName, Object event);
/**
* Signal an event to any process instance that listens to a given signal and belongs to a given deployment
*
* @param deployment identifier of the deployment
* @param signalName the ID of the signal in the process
* @param event the event object to be passed with the event
* @throws DeploymentNotFoundException in case the deployment unit was not found
*/
void signalEvent(String deployment, String signalName, Object event);
/**
* Returns process instance information. Will return null if no
* active process with the ID is found
*
* @param processInstanceId The process instance unique identifier
* @return Process instance information
* @throws DeploymentNotFoundException in case the deployment unit was not found
*/
ProcessInstance getProcessInstance(Long processInstanceId);
/**
* Returns process instance information. Will return null if no
* active process with the ID is found
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId The process instance unique identifier
* @return Process instance information
* @throws DeploymentNotFoundException in case the deployment unit was not found
*/
ProcessInstance getProcessInstance(String deploymentId, Long processInstanceId);
/**
* Returns process instance information. Will return null if no
* active process with that correlation key is found
*
* @param correlationKey correlation key assigned to the process instance
* @return Process instance information
* @throws DeploymentNotFoundException in case the deployment unit was not found
*/
ProcessInstance getProcessInstance(CorrelationKey correlationKey);
/**
* Returns process instance information. Will return null if no
* active process with that correlation key is found
*
* @param deploymentId deployment to which the process instance belongs
* @param correlationKey correlation key assigned to the process instance
* @return Process instance information
* @throws DeploymentNotFoundException in case the deployment unit was not found
*/
ProcessInstance getProcessInstance(String deploymentId, CorrelationKey correlationKey);
/**
* Sets a process variable.
* @param processInstanceId The process instance unique identifier
* @param variableId The variable ID to set
* @param value The variable value
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void setProcessVariable(Long processInstanceId, String variableId, Object value);
/**
* Sets a process variable.
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId The process instance unique identifier
* @param variableId The variable id to set.
* @param value The variable value.
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void setProcessVariable(String deploymentId, Long processInstanceId, String variableId, Object value);
/**
* Sets process variables.
*
* @param processInstanceId The process instance unique identifier
* @param variables map of process variables (key = variable name, value = variable value)
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void setProcessVariables(Long processInstanceId, Map<String, Object> variables);
/**
* Sets process variables.
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId The process instance unique identifier
* @param variables map of process variables (key = variable name, value = variable value)
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
void setProcessVariables(String deploymentId, Long processInstanceId, Map<String, Object> variables);
/**
* Gets a process instance variable.
*
* @param processInstanceId the process instance unique identifier
* @param variableName the variable name to get from the process
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
Object getProcessInstanceVariable(Long processInstanceId, String variableName);
/**
* Gets a process instance variable.
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId the process instance unique identifier
* @param variableName the variable name to get from the process
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
Object getProcessInstanceVariable(String deploymentId, Long processInstanceId, String variableName);
/**
* Gets all process instance variable values.
*
* @param processInstanceId The process instance unique identifier
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
Map<String, Object> getProcessInstanceVariables(Long processInstanceId);
/**
* Gets all process instance variable values.
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId The process instance unique identifier
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
Map<String, Object> getProcessInstanceVariables(String deploymentId, Long processInstanceId);
/**
* Returns all signals available in current state of given process instance
*
* @param processInstanceId process instance ID
* @return list of available signals or empty list if no signals are available
*/
Collection<String> getAvailableSignals(Long processInstanceId);
/**
* Returns all signals available in current state of given process instance
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId process instance ID
* @return list of available signals or empty list if no signals are available
*/
Collection<String> getAvailableSignals(String deploymentId, Long processInstanceId);
/**
* Completes the specified WorkItem with the given results
*
* @param id workItem ID
* @param results results of the workItem
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws WorkItemNotFoundException in case a work item with the given ID was not found
*/
void completeWorkItem(Long id, Map<String, Object> results);
/**
* Completes the specified WorkItem with the given results
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId process instance ID to which the work item belongs
* @param id workItem ID
* @param results results of the workItem
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws WorkItemNotFoundException in case a work item with the given ID was not found
*/
void completeWorkItem(String deploymentId, Long processInstanceId, Long id, Map<String, Object> results);
/**
* Abort the specified workItem
*
* @param id workItem ID
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws WorkItemNotFoundException in case a work item with the given ID was not found
*/
void abortWorkItem(Long id);
/**
* Abort the specified workItem
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId process instance ID to which the work item belongs
* @param id workItem ID
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws WorkItemNotFoundException in case a work item with the given ID was not found
*/
void abortWorkItem(String deploymentId, Long processInstanceId, Long id);
/**
* Returns the specified workItem
*
* @param id workItem ID
* @return The specified workItem
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws WorkItemNotFoundException in case a work item with the given ID was not found
*/
WorkItem getWorkItem(Long id);
/**
* Returns the specified workItem
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId process instance ID to which the work item belongs
* @param id workItem ID
* @return The specified workItem
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws WorkItemNotFoundException in case a work item with the given ID was not found
*/
WorkItem getWorkItem(String deploymentId, Long processInstanceId, Long id);
/**
* Returns active work items by process instance ID.
*
* @param processInstanceId process instance ID
* @return The list of active workItems for the process instance
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
List<WorkItem> getWorkItemByProcessInstance(Long processInstanceId);
/**
* Returns active work items by process instance ID.
*
* @param deploymentId deployment to which the process instance belongs
* @param processInstanceId process instance ID
* @return The list of active workItems for the process instance
* @throws DeploymentNotFoundException in case the deployment unit was not found
* @throws ProcessInstanceNotFoundException in case a process instance with the given ID was not found
*/
List<WorkItem> getWorkItemByProcessInstance(String deploymentId, Long processInstanceId);
/**
* Executes the provided command on the underlying command executor (usually KieSession)
* @param deploymentId deployment identifier
* @param command actual command for execution
* @return results of the command execution
* @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist
* @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active for restricted commands (for example, start process)
*/
public <T> T execute(String deploymentId, Command<T> command);
/**
* Executes the provided command on the underlying command executor (usually KieSession)
* @param deploymentId deployment identifier
* @param context context implementation to be used to get the runtime engine
* @param command actual command for execution
* @return results of the command execution
* @throws DeploymentNotFoundException in case a deployment with the given deployment identifier does not exist
* @throws DeploymentNotActiveException in case the deployment with the given deployment identifier is not active for restricted commands (for example, start process)
*/
public <T> T execute(String deploymentId, Context<?> context, Command<T> command);
}
Runtime data service
You can use the runtime data service to retrieve all runtime information about processes, such as started process instances and executed node instances.
For example, you can build a list-based UI to show process definitions, process instances, tasks for a given user, and other data, based on information provided by the runtime data service.
This service is optimized to be as efficient as possible while providing all required information.
The following examples show various uses of this service.
Collection<ProcessDefinition> definitions = runtimeDataService.getProcesses(new QueryContext());

Collection<ProcessInstanceDesc> instances = runtimeDataService.getProcessInstances(new QueryContext());

Collection<NodeInstanceDesc> activeNodes = runtimeDataService.getProcessInstanceHistoryActive(processInstanceId, new QueryContext());

List<TaskSummary> taskSummaries = runtimeDataService.getTasksAssignedAsPotentialOwner("john", new QueryFilter(0, 10));
The runtime data service methods support two important parameters, QueryContext and QueryFilter. QueryFilter is an extension of QueryContext. You can use these parameters to manage the result set, providing pagination, sorting, and ordering. You can also use them to apply additional filtering when searching for user tasks.
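For example, the following sketch pages and sorts the process instance list and limits a task query to the first ten results. The start_date column name is an assumption based on the process instance log:
// Return at most 100 process instances, starting at offset 0, sorted by start_date in ascending order
QueryContext ctx = new QueryContext(0, 100, "start_date", true);
Collection<ProcessInstanceDesc> instances = runtimeDataService.getProcessInstances(ctx);

// QueryFilter extends QueryContext; here it limits the result to the first ten tasks for the user
QueryFilter filter = new QueryFilter(0, 10);
List<TaskSummary> tasks = runtimeDataService.getTasksAssignedAsPotentialOwner("john", filter);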
The following definition shows the methods of the RuntimeDataService interface:
RuntimeDataService interface
public interface RuntimeDataService {
/**
* Represents type of node instance log entries
*
*/
enum EntryType {
START(0),
END(1),
ABORTED(2),
SKIPPED(3),
OBSOLETE(4),
ERROR(5);
}
// Process instance information
/**
* Returns a list of process instance descriptions
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the available process instances
*/
Collection<ProcessInstanceDesc> getProcessInstances(QueryContext queryContext);
/**
* Returns a list of all process instance descriptions with the given statuses and initiated by <code>initiator</code>
* @param states list of possible state (int) values that the {@link ProcessInstance} can have
* @param initiator the initiator of the {@link ProcessInstance}
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the process instances that match
* the given criteria (states and initiator)
*/
Collection<ProcessInstanceDesc> getProcessInstances(List<Integer> states, String initiator, QueryContext queryContext);
/**
* Returns a list of process instance descriptions found for the given process ID and statuses and initiated by <code>initiator</code>
* @param states list of possible state (int) values that the {@link ProcessInstance} can have
* @param processId ID of the {@link Process} (definition) used when starting the process instance
* @param initiator initiator of the {@link ProcessInstance}
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the process instances that match
* the given criteria (states, processId, and initiator)
*/
Collection<ProcessInstanceDesc> getProcessInstancesByProcessId(List<Integer> states, String processId, String initiator, QueryContext queryContext);
/**
* Returns a list of process instance descriptions found for the given process name and statuses and initiated by <code>initiator</code>
* @param states list of possible state (int) values that the {@link ProcessInstance} can have
* @param processName name (not ID) of the {@link Process} (definition) used when starting the process instance
* @param initiator initiator of the {@link ProcessInstance}
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the process instances that match
* the given criteria (states, processName and initiator)
*/
Collection<ProcessInstanceDesc> getProcessInstancesByProcessName(List<Integer> states, String processName, String initiator, QueryContext queryContext);
/**
* Returns a list of process instance descriptions found for the given deployment ID and statuses
* @param deploymentId deployment ID of the runtime
* @param states list of possible state (int) values that the {@link ProcessInstance} can have
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the process instances that match
* the given criteria (deploymentId and states)
*/
Collection<ProcessInstanceDesc> getProcessInstancesByDeploymentId(String deploymentId, List<Integer> states, QueryContext queryContext);
/**
* Returns process instance descriptions found for the given processInstanceId. If no descriptions are found, null is returned. At the same time, the method
* fetches all active tasks (in status: Ready, Reserved, InProgress) to provide the information about what user task is keeping each instance
* and who owns the task (if the task is already claimed by a user)
* @param processInstanceId ID of the process instance to be fetched
* @return process instance information, in the form of a {@link ProcessInstanceDesc} instance
*/
ProcessInstanceDesc getProcessInstanceById(long processInstanceId);
/**
* Returns the active process instance description found for the given correlation key. If none is found, returns null. At the same time it
* fetches all active tasks (in status: Ready, Reserved, InProgress) to provide information about which user task is keeping each instance
* and who owns the task (if the task is already claimed by a user)
* @param correlationKey correlation key assigned to the process instance
* @return process instance information, in the form of a {@link ProcessInstanceDesc} instance
*/
ProcessInstanceDesc getProcessInstanceByCorrelationKey(CorrelationKey correlationKey);
/**
* Returns process instances descriptions (regardless of their states) found for the given correlation key. If no descriptions are found, an empty list is returned
* This query uses 'LIKE' to match correlation keys so it accepts partial keys. Matching
* is performed based on a 'starts with' criterion
* @param correlationKey correlation key assigned to the process instance
* @return list of {@link ProcessInstanceDesc} instances representing the process instances that match
* the given correlation key
*/
Collection<ProcessInstanceDesc> getProcessInstancesByCorrelationKey(CorrelationKey correlationKey, QueryContext queryContext);
/**
* Returns process instance descriptions, filtered by their states, that were found for the given correlation key. If none are found, returns an empty list
* This query uses 'LIKE' to match correlation keys so it accepts partial keys. Matching
* is performed based on a 'starts with' criterion
* @param correlationKey correlation key assigned to process instance
* @param states list of possible state (int) values that the {@link ProcessInstance} can have
* @return list of {@link ProcessInstanceDesc} instances representing the process instances that match
* the given correlation key
*/
Collection<ProcessInstanceDesc> getProcessInstancesByCorrelationKeyAndStatus(CorrelationKey correlationKey, List<Integer> states, QueryContext queryContext);
/**
* Returns a list of process instance descriptions found for the given process definition ID
* @param processDefId ID of the process definition
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the process instances that match
* the given criteria (deploymentId and states)
*/
Collection<ProcessInstanceDesc> getProcessInstancesByProcessDefinition(String processDefId, QueryContext queryContext);
/**
* Returns a list of process instance descriptions found for the given process definition ID, filtered by state
* @param processDefId ID of the process definition
* @param states list of possible state (int) values that the {@link ProcessInstance} can have
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the process instances that match
* the given criteria (deploymentId and states)
*/
Collection<ProcessInstanceDesc> getProcessInstancesByProcessDefinition(String processDefId, List<Integer> states, QueryContext queryContext);
/**
* Returns process instance descriptions that match process instances that have the given variable defined, filtered by state
* @param variableName name of the variable that process instance should have
* @param states list of possible state (int) values that the {@link ProcessInstance} can have. If null, returns only active instances
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the process instances that have the given variable defined
*/
Collection<ProcessInstanceDesc> getProcessInstancesByVariable(String variableName, List<Integer> states, QueryContext queryContext);
/**
* Returns process instance descriptions that match process instances that have the given variable defined and the value of the variable matches the given variableValue
* @param variableName name of the variable that process instance should have
* @param variableValue value of the variable to match
* @param states list of possible state (int) values that the {@link ProcessInstance} can have. If null, returns only active instances
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the process instances that have the given variable defined with the given value
*/
Collection<ProcessInstanceDesc> getProcessInstancesByVariableAndValue(String variableName, String variableValue, List<Integer> states, QueryContext queryContext);
/**
* Returns a list of process instance descriptions that have the specified parent
* @param parentProcessInstanceId ID of the parent process instance
* @param states list of possible state (int) values that the {@link ProcessInstance} can have. If null, returns only active instances
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the available process instances
*/
Collection<ProcessInstanceDesc> getProcessInstancesByParent(Long parentProcessInstanceId, List<Integer> states, QueryContext queryContext);
/**
* Returns a list of process instance descriptions that are subprocesses of the specified process, or subprocesses of those subprocesses, and so on. The list includes the full hierarchy of subprocesses under the specified parent process
* @param processInstanceId ID of the parent process instance
* @param states list of possible state (int) values that the {@link ProcessInstance} can have. If null, returns only active instances
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessInstanceDesc} instances representing the full hierarchy of this process
*/
Collection<ProcessInstanceDesc> getProcessInstancesWithSubprocessByProcessInstanceId(Long processInstanceId, List<Integer> states, QueryContext queryContext);
// Node and Variable instance information
/**
* Returns the active node instance descriptor for the given work item ID, if the work item exists and is active
* @param workItemId identifier of the work item
* @return NodeInstanceDesc for work item if it exists and is still active, otherwise null is returned
*/
NodeInstanceDesc getNodeInstanceForWorkItem(Long workItemId);
/**
* Returns a trace of all active nodes for the given process instance ID
* @param processInstanceId unique identifier of the process instance
* @param queryContext control parameters for the result, such as sorting and paging
* @return collection of node instance descriptions representing the active nodes
*/
Collection<NodeInstanceDesc> getProcessInstanceHistoryActive(long processInstanceId, QueryContext queryContext);
/**
* Returns a trace of all executed (completed) nodes for the given process instance ID
* @param processInstanceId unique identifier of the process instance
* @param queryContext control parameters for the result, such as sorting and paging
* @return collection of node instance descriptions representing the completed nodes
*/
Collection<NodeInstanceDesc> getProcessInstanceHistoryCompleted(long processInstanceId, QueryContext queryContext);
/**
* Returns a complete trace of all executed (completed) and active nodes for the given process instance ID
* @param processInstanceId unique identifier of the process instance
* @param queryContext control parameters for the result, such as sorting and paging
* @return {@link NodeInstance} information, in the form of a list of {@link NodeInstanceDesc} instances,
* that come from a process instance that matches the given criteria (deploymentId, processId)
*/
Collection<NodeInstanceDesc> getProcessInstanceFullHistory(long processInstanceId, QueryContext queryContext);
/**
* Returns a complete trace of all events of the given type (START, END, ABORTED, SKIPPED, OBSOLETE or ERROR) for the given process instance
* @param processInstanceId unique identifier of the process instance
* @param queryContext control parameters for the result, such as sorting and paging
* @param type type of events to be returned (START, END, ABORTED, SKIPPED, OBSOLETE or ERROR). To return all events, use {@link #getProcessInstanceFullHistory(long, QueryContext)}
* @return collection of node instance descriptions
*/
Collection<NodeInstanceDesc> getProcessInstanceFullHistoryByType(long processInstanceId, EntryType type, QueryContext queryContext);
/**
* Returns a trace of all nodes for the given node types and process instance ID
* @param processInstanceId unique identifier of the process instance
* @param nodeTypes list of node types to filter nodes of the process instance
* @param queryContext control parameters for the result, such as sorting and paging
* @return collection of node instance descriptions
*/
Collection<NodeInstanceDesc> getNodeInstancesByNodeType(long processInstanceId, List<String> nodeTypes, QueryContext queryContext);
/**
* Returns a trace of all nodes for the given node types and correlation key
* @param correlationKey correlation key
* @param states list of states
* @param nodeTypes list of node types to filter nodes of process instance
* @param queryContext control parameters for the result, such as sorting and paging
* @return collection of node instance descriptions
*/
Collection<NodeInstanceDesc> getNodeInstancesByCorrelationKeyNodeType(CorrelationKey correlationKey, List<Integer> states, List<String> nodeTypes, QueryContext queryContext);
/**
* Returns a collection of all process variables and their current values for the given process instance
* @param processInstanceId process instance ID
* @return information about variables in the specified process instance,
* represented by a list of {@link VariableDesc} instances
*/
Collection<VariableDesc> getVariablesCurrentState(long processInstanceId);
/**
* Returns a collection of changes to the given variable within the scope of a process instance
* @param processInstanceId unique identifier of the process instance
* @param variableId ID of the variable
* @param queryContext control parameters for the result, such as sorting and paging
* @return information about the variable with the given ID in the specified process instance,
* represented by a list of {@link VariableDesc} instances
*/
Collection<VariableDesc> getVariableHistory(long processInstanceId, String variableId, QueryContext queryContext);
// Process information
/**
* Returns a list of process definitions for the given deployment ID
* @param deploymentId deployment ID of the runtime
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessDefinition} instances representing processes that match
* the given criteria (deploymentId)
*/
Collection<ProcessDefinition> getProcessesByDeploymentId(String deploymentId, QueryContext queryContext);
/**
* Returns a list of process definitions that match the given filter
* @param filter regular expression
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of {@link ProcessDefinition} instances with a name or ID that matches the given regular expression
*/
Collection<ProcessDefinition> getProcessesByFilter(String filter, QueryContext queryContext);
/**
* Returns all process definitions available
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of all available processes, in the form a of a list of {@link ProcessDefinition} instances
*/
Collection<ProcessDefinition> getProcesses(QueryContext queryContext);
/**
* Returns a list of process definition identifiers for the given deployment ID
* @param deploymentId deployment ID of the runtime
* @param queryContext control parameters for the result, such as sorting and paging
* @return list of all available process id's for a particular deployment/runtime
*/
Collection<String> getProcessIds(String deploymentId, QueryContext queryContext);
/**
* Returns process definitions for the given process ID regardless of the deployment
* @param processId ID of the process
* @return collection of {@link ProcessDefinition} instances representing the {@link Process}
* with the specified process ID
*/
Collection<ProcessDefinition> getProcessesById(String processId);
/**
* Returns the process definition for the given deployment and process identifiers
* @param deploymentId ID of the deployment (runtime)
* @param processId ID of the process
* @return {@link ProcessDefinition} instance, representing the {@link Process}
* that is present in the specified deployment with the specified process ID
*/
ProcessDefinition getProcessesByDeploymentIdProcessId(String deploymentId, String processId);
// user task query operations
/**
* Return a task by its workItemId
* @param workItemId
* @return @{@link UserTaskInstanceDesc} task
*/
UserTaskInstanceDesc getTaskByWorkItemId(Long workItemId);
/**
* Return a task by its taskId
* @param taskId
* @return @{@link UserTaskInstanceDesc} task
*/
UserTaskInstanceDesc getTaskById(Long taskId);
/**
* Return a task by its taskId with SLA data if the withSLA param is true
* @param taskId
* @param withSLA
* @return @{@link UserTaskInstanceDesc} task
*/
UserTaskInstanceDesc getTaskById(Long taskId, boolean withSLA);
/**
* Return a list of assigned tasks for a Business Administrator user. Business
* administrators play the same role as task stakeholders but at task type
* level. Therefore, business administrators can perform the exact same
* operations as task stakeholders. Business administrators can also observe
* the progress of notifications
*
* @param userId identifier of the Business Administrator user
* @param filter filter for the list of assigned tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksAssignedAsBusinessAdministrator(String userId, QueryFilter filter);
/**
* Return a list of assigned tasks for a Business Administrator user with one of the listed
* statuses
* @param userId identifier of the Business Administrator user
* @param statuses the statuses of the tasks to return
* @param filter filter for the list of assigned tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksAssignedAsBusinessAdministratorByStatus(String userId, List<Status> statuses, QueryFilter filter);
/**
* Return a list of tasks that a user is eligible to own
*
* @param userId identifier of the user
* @param filter filter for the list of tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, QueryFilter filter);
/**
* Return a list of tasks the user or user groups are eligible to own
*
* @param userId identifier of the user
* @param groupIds a list of identifiers of the groups
* @param filter filter for the list of tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, List<String> groupIds, QueryFilter filter);
/**
* Return a list of tasks the user is eligible to own and that are in one of the listed
* statuses
*
* @param userId identifier of the user
* @param status filter for the task statuses
* @param filter filter for the list of tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksAssignedAsPotentialOwnerByStatus(String userId, List<Status> status, QueryFilter filter);
/**
* Return a list of tasks the user or groups are eligible to own and that are in one of the listed
* statuses
* @param userId identifier of the user
* @param groupIds filter for the identifiers of the groups
* @param status filter for the task statuses
* @param filter filter for the list of tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, List<String> groupIds, List<Status> status, QueryFilter filter);
/**
* Return a list of tasks the user is eligible to own, that are in one of the listed
* statuses, and that have an expiration date starting at <code>from</code>. Tasks that do not have expiration date set
* will also be included in the result set
*
* @param userId identifier of the user
* @param status filter for the task statuses
* @param from earliest expiration date for the tasks
* @param filter filter for the list of tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksAssignedAsPotentialOwnerByExpirationDateOptional(String userId, List<Status> status, Date from, QueryFilter filter);
/**
* Return a list of tasks the user has claimed, that are in one of the listed
* statuses, and that have an expiration date starting at <code>from</code>. Tasks that do not have expiration date set
* will also be included in the result set
*
* @param userId identifier of the user
* @param strStatuses filter for the task statuses
* @param from earliest expiration date for the tasks
* @param filter filter for the list of tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksOwnedByExpirationDateOptional(String userId, List<Status> strStatuses, Date from, QueryFilter filter);
/**
* Return a list of tasks the user has claimed
*
* @param userId identifier of the user
* @param filter filter for the list of tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksOwned(String userId, QueryFilter filter);
/**
* Return a list of tasks the user has claimed with one of the listed
* statuses
*
* @param userId identifier of the user
* @param status filter for the task statuses
* @param filter filter for the list of tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksOwnedByStatus(String userId, List<Status> status, QueryFilter filter);
/**
* Get a list of tasks the process instance is waiting on
*
* @param processInstanceId identifier of the process instance
* @return list of task identifiers
*/
List<Long> getTasksByProcessInstanceId(Long processInstanceId);
/**
* Get a list of tasks the process instance is waiting on that are in one of the
* listed statuses
*
* @param processInstanceId identifier of the process instance
* @param status filter for the task statuses
* @param filter filter for the list of tasks
* @return list of @{@link TaskSummary} task summaries
*/
List<TaskSummary> getTasksByStatusByProcessInstanceId(Long processInstanceId, List<Status> status, QueryFilter filter);
/**
* Get a list of task audit logs for all tasks owned by the user, applying a query filter to the list of tasks
*
*
* @param userId identifier of the user that owns the tasks
* @param filter filter for the list of tasks
* @return list of @{@link AuditTask} task audit logs
*/
List<AuditTask> getAllAuditTask(String userId, QueryFilter filter);
/**
* Get a list of task audit logs for all tasks that are active and owned by the user, applying a query filter to the list of tasks
*
* @param userId identifier of the user that owns the tasks
* @param filter filter for the list of tasks
* @return list of @{@link AuditTask} audit tasks
*/
List<AuditTask> getAllAuditTaskByStatus(String userId, QueryFilter filter);
/**
* Get a list of task audit logs for group tasks (actualOwner == null) for the user, applying a query filter to the list of tasks
*
* @param userId identifier of the user that is associated with the group tasks
* @param filter filter for the list of tasks
* @return list of @{@link AuditTask} audit tasks
*/
List<AuditTask> getAllGroupAuditTask(String userId, QueryFilter filter);
/**
* Get a list of task audit logs for tasks that are assigned to a Business Administrator user, applying a query filter to the list of tasks
*
* @param userId identifier of the Business Administrator user
* @param filter filter for the list of tasks
* @return list of @{@link AuditTask} audit tasks
*/
List<AuditTask> getAllAdminAuditTask(String userId, QueryFilter filter);
/**
* Gets a list of task events for the given task
* @param taskId identifier of the task
* @param filter filter for the list of events
* @return list of @{@link TaskEvent} task events
*/
List<TaskEvent> getTaskEvents(long taskId, QueryFilter filter);
/**
* Query on {@link TaskSummary} instances
* @param userId the user associated with the tasks queried
* @return {@link TaskSummaryQueryBuilder} used to create the query
*/
TaskSummaryQueryBuilder taskSummaryQuery(String userId);
/**
* Gets a list of {@link TaskSummary} instances for tasks that define a given variable
* @param userId the ID of the user associated with the tasks
* @param variableName the name of the task variable
* @param statuses the list of statuses that the task can have
* @param queryContext the query context
* @return a {@link List} of {@link TaskSummary} instances
*/
List<TaskSummary> getTasksByVariable(String userId, String variableName, List<Status> statuses, QueryContext queryContext);
/**
* Gets a list of {@link TaskSummary} instances for tasks that define a given variable and the variable is set to the given value
* @param userId the ID of the user associated with the tasks
* @param variableName the name of the task variable
* @param variableValue the value of the task variable
* @param statuses the list of statuses that the task can have
* @param context the query context
* @return a {@link List} of {@link TaskSummary} instances
*/
List<TaskSummary> getTasksByVariableAndValue(String userId, String variableName, String variableValue, List<Status> statuses, QueryContext context);
}
User task service
The user task service covers the complete lifecycle of an individual task, and you can use the service to manage a user task from start to end.
Task queries are not a part of the user task service. Use the runtime data service to query for tasks. Use the user task service for scoped operations on one task, including the following actions:
-
Modification of selected properties
-
Access to task variables
-
Access to task attachments
-
Access to task comments
The user task service is also a command executor. You can use it to execute custom task commands.
The following example shows starting a process and interacting with a task in the process:
// Start a process that contains a user task
long processInstanceId =
processService.startProcess(deployUnit.getIdentifier(), "org.jbpm.writedocument");

// Find the task that the process instance created
List<Long> taskIds =
runtimeDataService.getTasksByProcessInstanceId(processInstanceId);
Long taskId = taskIds.get(0);

// Work on the task as user "john" and complete it with result data
userTaskService.start(taskId, "john");
UserTaskInstanceDesc task = runtimeDataService.getTaskById(taskId);

Map<String, Object> results = new HashMap<String, Object>();
results.put("Result", "some document data");
userTaskService.complete(taskId, "john", results);
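If a task is assigned to a group rather than to a single user, it must be claimed before it can be started. The following sketch assumes such a group task and uses the claim and release operations of the user task service:
// Claim the group task so that only "john" can work on it
userTaskService.claim(taskId, "john");

// Release the task again if "john" decides not to work on it
userTaskService.release(taskId, "john");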
Quartz-based timer service
The process engine provides a cluster-ready timer service using Quartz. You can use the service to dispose of or load your KIE session at any time. The service manages how long a KIE session is active so that each timer fires appropriately.
The following example shows a basic Quartz configuration file for a clustered environment:
#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = jBPMClusteredScheduler
org.quartz.scheduler.instanceId = AUTO
#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.dataSource=managedDS
org.quartz.jobStore.nonManagedTXDataSource=nonManagedDS
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval = 20000
#=========================================================================
# Configure Datasources
#=========================================================================
org.quartz.dataSource.managedDS.jndiURL=jboss/datasources/psbpmsDS
org.quartz.dataSource.nonManagedDS.jndiURL=jboss/datasources/quartzNonManagedDS
You must modify the previous example to fit your environment.
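The process engine typically locates this configuration through the org.quartz.properties system property, which points to the Quartz configuration file. The following sketch shows one way to set the property; the file path is an assumption for your environment:
// Point the timer service at the Quartz configuration file before the process engine starts.
// The file location below is an example only; adjust it for your environment.
System.setProperty("org.quartz.properties", "/opt/jbpm/quartz-definition.properties");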
Query service
The query service provides advanced search capabilities that are based on Dashbuilder data sets.
With this approach, you can control how data is retrieved from the underlying data store. You can use complex JOIN statements with external tables such as JPA entity tables or custom system database tables.
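For example, a data set definition can join the process instance log with a custom application table. In the following sketch, the orders table and its columns are hypothetical:
// Hypothetical data set that joins the process instance log with a custom "orders" table
SqlQueryDefinition query = new SqlQueryDefinition("processesWithOrders", "java:jboss/datasources/ExampleDS");
query.setExpression(
"select log.processinstanceid, log.processid, o.total " +
"from processinstancelog log " +
"join orders o on o.order_id = log.correlationkey");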
The query service is built around the following two sets of operations:
-
Management operations: register a query definition, replace a query definition, unregister (remove) a query definition, get a query definition, and get all registered query definitions
-
Runtime operations: a simple query based on QueryParam as the filter provider, and an advanced query based on QueryParamBuilder as the filter provider
Dashbuilder data sets provide support for multiple data sources, such as CSV, SQL, and Elasticsearch. However, the process engine uses an RDBMS-based backend and focuses on SQL-based data sets.
Therefore, the process engine query service is a subset of Dashbuilder data set capabilities that enables efficient queries with a simple API.
Key classes of the query service
The query service relies on the following key classes:
-
QueryDefinition: Represents the definition of a data set. The definition consists of a unique name, an SQL expression (the query), and the source, which is the JNDI name of the data source to use when performing queries.
-
QueryParam: The basic structure that represents an individual query parameter or condition. This structure consists of the column name, operator, and expected values.
-
QueryResultMapper: The class that maps raw dataset data (rows and columns) to an object representation.
-
QueryParamBuilder: The class that builds query filters that are applied to the query definition to invoke the query.
QueryResultMapper maps data taken from a database (dataset) to an object representation. It is similar to ORM providers such as Hibernate, which map tables to entities.
Many object types can be used for representing dataset results. Therefore, existing mappers might not always suit your needs. QueryResultMapper implementations are pluggable, and you can provide your own mapper when necessary to transform dataset data into any type you need.
The process engine supplies the following mappers:
-
org.jbpm.kie.services.impl.query.mapper.ProcessInstanceQueryMapper, registered with the name ProcessInstances
-
org.jbpm.kie.services.impl.query.mapper.ProcessInstanceWithVarsQueryMapper, registered with the name ProcessInstancesWithVariables
-
org.jbpm.kie.services.impl.query.mapper.ProcessInstanceWithCustomVarsQueryMapper, registered with the name ProcessInstancesWithCustomVariables
-
org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceQueryMapper, registered with the name UserTasks
-
org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceWithVarsQueryMapper, registered with the name UserTasksWithVariables
-
org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceWithCustomVarsQueryMapper, registered with the name UserTasksWithCustomVariables
-
org.jbpm.kie.services.impl.query.mapper.TaskSummaryQueryMapper, registered with the name TaskSummaries
-
org.jbpm.kie.services.impl.query.mapper.RawListQueryMapper, registered with the name RawList
Each QueryResultMapper is registered with a unique string name. You can look up mappers by this name instead of referencing the full class name. This feature is especially important when using EJB remote invocation of services, because it avoids relying on a particular implementation on the client side.
To reference a QueryResultMapper by its string name, use NamedQueryMapper, which is a part of the jbpm-services-api module. This class acts as a delegate (lazy delegate) and looks up the actual mapper when the query is performed.
NamedQueryMapper
queryService.query("my query def", new NamedQueryMapper<Collection<ProcessInstanceDesc>>("ProcessInstances"), new QueryContext());
QueryParamBuilder provides an advanced way of building filters for data sets.
By default, when you use a query method of QueryService that accepts zero or more QueryParam instances, all of these parameters are joined with an AND operator, so a data entry must match all of them.
However, sometimes more complicated relationships between parameters are required. You can use QueryParamBuilder to build custom builders that provide filters at the time the query is issued.
One existing implementation of QueryParamBuilder is available in the process engine. It covers default QueryParams that are based on the core functions.
These core functions are SQL-based conditions, including the following conditions:
-
IS_NULL
-
NOT_NULL
-
EQUALS_TO
-
NOT_EQUALS_TO
-
LIKE_TO
-
GREATER_THAN
-
GREATER_OR_EQUALS_TO
-
LOWER_THAN
-
LOWER_OR_EQUALS_TO
-
BETWEEN
-
IN
-
NOT_IN
Before invoking a query, the process engine invokes the build method of the QueryParamBuilder
interface as many times as necessary while the method returns a non-null value. Because of this approach, you can build up complex filter options that could not be expressed by a simple list of QueryParams
.
The following example shows a basic implementation of QueryParamBuilder
. It relies on the DashBuilder Dataset API.
QueryParamBuilder
public class TestQueryParamBuilder implements QueryParamBuilder<ColumnFilter> {
private Map<String, Object> parameters;
private boolean built = false;
public TestQueryParamBuilder(Map<String, Object> parameters) {
this.parameters = parameters;
}
@Override
public ColumnFilter build() {
// return null if it was already invoked
if (built) {
return null;
}
String columnName = "processInstanceId";
ColumnFilter filter = FilterFactory.OR(
FilterFactory.greaterOrEqualsTo((Long)parameters.get("min")),
FilterFactory.lowerOrEqualsTo((Long)parameters.get("max")));
filter.setColumnId(columnName);
built = true;
return filter;
}
}
After implementing the builder, you can use an instance of this class when performing a query with the QueryService
service, as shown in the following example:
QueryService
service
queryService.query("my query def", ProcessInstanceQueryMapper.get(), new QueryContext(), paramBuilder);
Using the query service in a typical scenario
The following procedure outlines the typical way in which your code might use the query service.
-
Define the data set, which is a view of the data you want to use. Use the
QueryDefinition
class in the services API to complete this operation:
Defining the data set
SqlQueryDefinition query = new SqlQueryDefinition("getAllProcessInstances", "java:jboss/datasources/ExampleDS");
query.setExpression("select * from processinstancelog");
This example represents the simplest possible query definition.
The constructor requires the following parameters:
-
A unique name that identifies the query at run time
-
A JNDI data source name to use for performing queries with this definition
The parameter of the
setExpression()
method is the SQL statement that builds up the data set view. Queries in the query service use data from this view and filter this data as necessary.
-
-
Register the query:
Registering a query
queryService.registerQuery(query);
-
If required, collect all the data from the dataset, without any filtering:
Collecting all the data from the dataset
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext());
This simple query uses defaults from
QueryContext
for paging and sorting. -
If required, use a
QueryContext
object that changes the defaults of the paging and sorting:
Changing defaults using a QueryContext object
QueryContext ctx = new QueryContext(0, 100, "start_date", true);
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), ctx);
-
If required, use the query to filter data:
Using a query to filter data
// single filter param
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances",
        ProcessInstanceQueryMapper.get(),
        new QueryContext(),
        QueryParam.likeTo(COLUMN_PROCESSID, true, "org.jbpm%"));

// multiple filter params (AND)
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances",
        ProcessInstanceQueryMapper.get(),
        new QueryContext(),
        QueryParam.likeTo(COLUMN_PROCESSID, true, "org.jbpm%"),
        QueryParam.in(COLUMN_STATUS, 1, 3));
With the query service, you can define what data to fetch and how to filter it. Limitations of the JPA provider and other similar limitations do not apply. You can tailor database queries to your environment to increase performance.
Advanced query service
The advanced query service provides capabilities to search for processes and tasks, based on process and task attributes, process variables, and internal variables of user tasks. The search automatically covers all existing processes in the process engine.
The names and required values of attributes and variables are defined in QueryParam
objects.
Process attributes include process instance ID, correlation key, process definition ID, and deployment ID. Task attributes include task name, owner, and status.
The following search methods are available:
-
queryProcessByVariables
: Search for process instances based on a list of process attributes and process variable values. To be included in the result, a process instance must have the listed attributes and the listed values in its process variables. -
queryProcessByVariablesAndTask
: Search for process instances based on a list of process attributes, process variable values, and task variable values. To be included in the result, a process instance must have the listed attributes and the listed values in its process variables. It also must include a task with the listed values in its task variables. -
queryUserTasksByVariables
: Search for user tasks based on a list of task attributes, task variable values, and process variable values. To be included in the result, a task must have the listed attributes and listed values in its task variables. It also must be included in a process with the listed values in its process variables.
The service is provided by the AdvanceRuntimeDataService
class. The interface for this class also defines predefined task and process attribute names.
AdvanceRuntimeDataService
interface
public interface AdvanceRuntimeDataService {
String TASK_ATTR_NAME = "TASK_NAME";
String TASK_ATTR_OWNER = "TASK_OWNER";
String TASK_ATTR_STATUS = "TASK_STATUS";
String PROCESS_ATTR_INSTANCE_ID = "PROCESS_INSTANCE_ID";
String PROCESS_ATTR_CORRELATION_KEY = "PROCESS_CORRELATION_KEY";
String PROCESS_ATTR_DEFINITION_ID = "PROCESS_DEFINITION_ID";
String PROCESS_ATTR_DEPLOYMENT_ID = "PROCESS_DEPLOYMENT_ID";
String PROCESS_COLLECTION_VARIABLES = "ATTR_COLLECTION_VARIABLES";
List<ProcessInstanceWithVarsDesc> queryProcessByVariables(List<QueryParam> attributes,
List<QueryParam> processVariables, QueryContext queryContext);
List<ProcessInstanceWithVarsDesc> queryProcessByVariablesAndTask(List<QueryParam> attributes,
List<QueryParam> processVariables, List<QueryParam> taskVariables,
List<String> potentialOwners, QueryContext queryContext);
List<UserTaskInstanceWithPotOwnerDesc> queryUserTasksByVariables(List<QueryParam> attributes,
List<QueryParam> taskVariables, List<QueryParam> processVariables,
List<String> potentialOwners, QueryContext queryContext);
}
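The following sketch shows one possible invocation of queryProcessByVariables(). The process definition ID, the "customer" variable name and value, and the way the service reference is obtained are illustrative assumptions only; the QueryParam.equalsTo factory method is assumed to correspond to the EQUALS_TO condition listed earlier.
import java.util.Arrays;
import java.util.List;

import org.jbpm.services.api.query.model.QueryParam;
import org.kie.api.runtime.query.QueryContext;

// Hypothetical lookup: obtain the service from your runtime environment or injection framework.
AdvanceRuntimeDataService advancedQueryService = ...;

// Match instances of a specific process definition (attribute filter)...
List<QueryParam> attributes = Arrays.asList(
        QueryParam.equalsTo(AdvanceRuntimeDataService.PROCESS_ATTR_DEFINITION_ID, "org.jbpm.example.hiring"));

// ...whose "customer" process variable equals "ACME" (process variable filter).
List<QueryParam> processVariables = Arrays.asList(
        QueryParam.equalsTo("customer", "ACME"));

List<ProcessInstanceWithVarsDesc> instances = advancedQueryService
        .queryProcessByVariables(attributes, processVariables, new QueryContext());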
Process instance migration service
The process instance migration service is a utility for migrating process instances from one deployment to another. Process or task variables are not affected by the migration. However, the new deployment can use a different process definition.
When migrating a process, the process instance migration service also automatically migrates all the subprocesses of the process, the subprocesses of those subprocesses, and so on. If you attempt to migrate a subprocess without migrating the parent process, the migration fails.
For the simplest approach to process migration, let active process instances finish and start new process instances in the new deployment. If this approach is not suitable for your needs, consider the following issues before starting process instance migration:
-
Backward compatibility
-
Data change
-
Need for node mapping
Whenever possible, create backward-compatible processes by extending process definitions. For example, removing nodes from the process definition breaks compatibility. If you make such changes, you must provide node mapping. Process instance migration uses node mapping if an active process instance is in a node that has been removed.
A node map contains source node IDs from the old process definition mapped to target node IDs in the new process definition. You can map nodes of the same type only, such as a user task to a user task.
IBM Business Automation Manager Open Editions offers several implementations of the migration service:
ProcessInstanceMigrationService
interface that implements the migration service
public interface ProcessInstanceMigrationService {
/**
* Migrates a given process instance that belongs to the source deployment into the target process ID that belongs to the target deployment.
* The following rules are enforced:
* <ul>
* <li>the source deployment ID must point to an existing deployment</li>
* <li>the process instance ID must point to an existing and active process instance</li>
* <li>the target deployment must exist</li>
* <li>the target process ID must exist in the target deployment</li>
* </ul>
* Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration.
* @param sourceDeploymentId deployment to which the process instance to be migrated belongs
* @param processInstanceId ID of the process instance to be migrated
* @param targetDeploymentId ID of the deployment to which the target process belongs
* @param targetProcessId ID of the process to which the process instance should be migrated
* @return returns complete migration report
*/
MigrationReport migrate(String sourceDeploymentId, Long processInstanceId, String targetDeploymentId, String targetProcessId);
/**
* Migrates a given process instance (with node mapping) that belongs to source deployment into the target process ID that belongs to the target deployment.
* The following rules are enforced:
* <ul>
* <li>the source deployment ID must point to an existing deployment</li>
* <li>the process instance ID must point to an existing and active process instance</li>
* <li>the target deployment must exist</li>
* <li>the target process ID must exist in the target deployment</li>
* </ul>
* Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration.
* @param sourceDeploymentId deployment to which the process instance to be migrated belongs
* @param processInstanceId ID of the process instance to be migrated
* @param targetDeploymentId ID of the deployment to which the target process belongs
* @param targetProcessId ID of the process to which the process instance should be migrated
* @param nodeMapping node mapping - source and target unique IDs of nodes to be mapped - from process instance active nodes to new process nodes
* @return returns complete migration report
*/
MigrationReport migrate(String sourceDeploymentId, Long processInstanceId, String targetDeploymentId, String targetProcessId, Map<String, String> nodeMapping);
/**
* Migrates given process instances that belong to the source deployment into a target process ID that belongs to the target deployment.
* The following rules are enforced:
* <ul>
* <li>the source deployment ID must point to an existing deployment</li>
* <li>the process instance ID must point to an existing and active process instance</li>
* <li>the target deployment must exist</li>
* <li>the target process ID must exist in the target deployment</li>
* </ul>
* Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration.
* @param sourceDeploymentId deployment to which the process instances to be migrated belong
* @param processInstanceIds list of process instance IDs to be migrated
* @param targetDeploymentId ID of the deployment to which the target process belongs
* @param targetProcessId ID of the process to which the process instances should be migrated
* @return returns complete migration report
*/
List<MigrationReport> migrate(String sourceDeploymentId, List<Long> processInstanceIds, String targetDeploymentId, String targetProcessId);
/**
* Migrates given process instances (with node mapping) that belong to the source deployment into a target process ID that belongs to the target deployment.
* The following rules are enforced:
* <ul>
* <li>the source deployment ID must point to an existing deployment</li>
* <li>the process instance ID must point to an existing and active process instance</li>
* <li>the target deployment must exist</li>
* <li>the target process ID must exist in the target deployment</li>
* </ul>
* Returns a migration report regardless of migration being successful or not; examine the report for the outcome of the migration.
* @param sourceDeploymentId deployment to which the process instances to be migrated belong
* @param processInstanceIds list of process instance ID to be migrated
* @param targetDeploymentId ID of the deployment to which the target process belongs
* @param targetProcessId ID of the process to which the process instances should be migrated
* @param nodeMapping node mapping - source and target unique IDs of nodes to be mapped - from process instance active nodes to new process nodes
* @return returns list of migration reports one per each process instance
*/
List<MigrationReport> migrate(String sourceDeploymentId, List<Long> processInstanceIds, String targetDeploymentId, String targetProcessId, Map<String, String> nodeMapping);
}
To migrate process instances on a KIE Server, use the following implementations. These methods are similar to the methods in the ProcessInstanceMigrationService
interface, providing the same migration implementations for KIE Server deployments.
ProcessAdminServicesClient
interface that implements the migration service for KIE Server deployments
public interface ProcessAdminServicesClient {
MigrationReportInstance migrateProcessInstance(String containerId, Long processInstanceId, String targetContainerId, String targetProcessId);
MigrationReportInstance migrateProcessInstance(String containerId, Long processInstanceId, String targetContainerId, String targetProcessId, Map<String, String> nodeMapping);
List<MigrationReportInstance> migrateProcessInstances(String containerId, List<Long> processInstancesId, String targetContainerId, String targetProcessId);
List<MigrationReportInstance> migrateProcessInstances(String containerId, List<Long> processInstancesId, String targetContainerId, String targetProcessId, Map<String, String> nodeMapping);
}
You can migrate a single process instance or multiple process instances at once. If you migrate multiple process instances, each instance is migrated in a separate transaction to ensure that the migrations do not affect each other.
After migration is completed, the migrate
method returns a MigrationReport
object that contains the following information:
-
The start and end dates of the migration.
-
The migration outcome (success or failure).
-
A log entry of the INFO, WARN, or ERROR type. The ERROR message terminates the migration.
The following example shows a process instance migration:
import org.kie.server.api.model.admin.MigrationReportInstance;
import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
public class ProcessInstanceMigrationTest {
private static final String SOURCE_CONTAINER = "com.redhat:MigrateMe:1.0";
private static final String SOURCE_PROCESS_ID = "MigrateMe.MigrateMev1";
private static final String TARGET_CONTAINER = "com.redhat:MigrateMe:2";
private static final String TARGET_PROCESS_ID = "MigrateMe.MigrateMeV2";
public static void main(String[] args) {
KieServicesConfiguration config = KieServicesFactory.newRestConfiguration("http://HOST:PORT/kie-server/services/rest/server", "USERNAME", "PASSWORD");
config.setMarshallingFormat(MarshallingFormat.JSON);
KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
long sourcePid = client.getProcessClient().startProcess(SOURCE_CONTAINER, SOURCE_PROCESS_ID);
// Use the 'report' object to return migration results.
MigrationReportInstance report = client.getAdminClient().migrateProcessInstance(SOURCE_CONTAINER, sourcePid, TARGET_CONTAINER, TARGET_PROCESS_ID);
System.out.println("Was migration successful:" + report.isSuccessful());
client.getProcessClient().abortProcessInstance(TARGET_CONTAINER, sourcePid);
}
}
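If active nodes must be mapped to different nodes in the new definition, you can pass a node mapping to the overloaded migrateProcessInstance method shown in the ProcessAdminServicesClient interface above. The following sketch continues the previous example; the node IDs are placeholders and must be replaced with the unique IDs of the corresponding nodes from your source and target process definitions.
import java.util.HashMap;
import java.util.Map;

import org.kie.server.api.model.admin.MigrationReportInstance;

// Map the unique ID of an active node in the old definition to the ID of the
// corresponding node (of the same type) in the new definition.
Map<String, String> nodeMapping = new HashMap<>();
nodeMapping.put("oldUserTaskNodeId", "newUserTaskNodeId"); // placeholder node IDs

MigrationReportInstance report = client.getAdminClient().migrateProcessInstance(
        SOURCE_CONTAINER, sourcePid, TARGET_CONTAINER, TARGET_PROCESS_ID, nodeMapping);
System.out.println("Was migration successful:" + report.isSuccessful());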
Known limitations of process instance migration
The following situations can cause a failure of the migration or incorrect migration:
-
A new or modified task requires inputs that are not available in the migrated process instance.
-
You modify tasks that precede the currently active task, and the changes have an impact on further processing.
-
You remove a human task that is currently active. To replace a human task, you must map it to another human task.
-
You add a new task parallel to the single active task. Because not all branches in an
AND
gateway are activated, the process becomes stuck. -
You remove active timer events (these events are not changed in the database).
-
You fix or update inputs and outputs in an active task (the task data is not migrated).
If you apply mapping to a task node, only the task node name and description are mapped. Other task fields, including the TaskName
variable, are not mapped to the new task.
Deployments and different process versions
The deployment service puts business assets into an execution environment. However, in some cases additional management is required to make the assets available in the correct context. Notably, if you deploy several versions of the same process, you must ensure that process instances use the correct version.
Activation and deactivation of deployments
In some cases, a number of process instances are running on a deployment, and then you add a new version of the same process to the runtime environment.
You might decide that new instances of this process definition must use the new version while the existing active instances should continue with the previous version.
To enable this scenario, use the following methods of the deployment service:
-
activate
: Activates a deployment so it can be available for interaction. You can list its process definitions and start new process instances for this deployment. -
deactivate
: Deactivates a deployment. This disables the option to list process definitions and to start new process instances of processes in the deployment. However, you can continue working with the process instances that are already active, for example, to signal events and interact with user tasks.
You can use this feature for smooth transition between project versions without the need for process instance migration.
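The following sketch shows how the two methods might be combined during a version transition. It assumes the deployment service and the HR deployment units used elsewhere in this document; the deployment IDs are illustrative.
// Deploy the new version alongside the old one.
deploymentService.deploy(new KModuleDeploymentUnit("org.jbpm", "HR", "2.0"));

// Stop new process instances from being started on the old version;
// instances that are already running continue to completion.
deploymentService.deactivate("org.jbpm:HR:1.0");

// If the old version must accept new instances again, reactivate it.
deploymentService.activate("org.jbpm:HR:1.0");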
Invocation of the latest version of a process
If you need to use the latest version of the project’s process, you can use the latest
keyword in several operations of the services. This approach is supported only when the process identifier remains the same in all versions.
The following example explains the feature.
The initial deployment unit is org.jbpm:HR:1.0
. It contains the first version of a hiring process.
After several weeks, you develop a new version and deploy it to the execution server as org.jbpm:HR:2.0
. It includes version 2 of the hiring process.
If you want to call the process and ensure that you use the latest version, you can use the following deployment ID:
org.jbpm:HR:latest
If you use this deployment ID, the process engine finds the latest available version of the project. It uses the following identifiers:
-
groupId
:org.jbpm
-
artifactId
:HR
The version numbers are compared by Maven rules to find the latest version.
The following code example shows deployment of multiple versions and interacting with the latest version:
KModuleDeploymentUnit deploymentUnitV1 = new KModuleDeploymentUnit("org.jbpm", "HR", "1.0");
deploymentService.deploy(deploymentUnitV1);
long processInstanceId = processService.startProcess("org.jbpm:HR:LATEST", "customtask");
ProcessInstanceDesc piDesc = runtimeDataService.getProcessInstanceById(processInstanceId);
// We have started a process with the project version 1
assertEquals(deploymentUnitV1.getIdentifier(), piDesc.getDeploymentId());
// Next we deploy version 2
KModuleDeploymentUnit deploymentUnitV2 = new KModuleDeploymentUnit("org.jbpm", "HR", "2.0");
deploymentService.deploy(deploymentUnitV2);
processInstanceId = processService.startProcess("org.jbpm:HR:LATEST", "customtask");
piDesc = runtimeDataService.getProcessInstanceById(processInstanceId);
// This time we have started a process with the project version 2
assertEquals(deploymentUnitV2.getIdentifier(), piDesc.getDeploymentId());
Note
|
This feature is also available in the KIE Server REST API. When sending a request with a deployment ID, you can use latest as the version identifier. |
Deployment synchronization
Process engine services include a deployment synchronizer that stores available deployments into a database, including the deployment descriptor for every deployment.
The synchronizer also monitors this table to keep it in sync with other installations that might be using the same data source. This functionality is especially important when running in a cluster or when Business Central and a custom application must operate on the same artifacts.
When you run core services directly, you must configure synchronization yourself. For EJB and CDI extensions, synchronization is enabled automatically.
The following code sample configures synchronization:
TransactionalCommandService commandService = new TransactionalCommandService(emf);
DeploymentStore store = new DeploymentStore();
store.setCommandService(commandService);
DeploymentSynchronizer sync = new DeploymentSynchronizer();
sync.setDeploymentService(deploymentService);
sync.setDeploymentStore(store);
DeploymentSyncInvoker invoker = new DeploymentSyncInvoker(sync, 2L, 3L, TimeUnit.SECONDS);
invoker.start();
....
invoker.stop();
With this configuration, deployments are synchronized every three seconds with an initial delay of two seconds.
Threads in the process engine
We can refer to two types of multi-threading: logical and technical. Technical multi-threading involves multiple threads or processes that are started, for example, by a Java or C program. Logical multi-threading happens in a BPM process, for example, after the process reaches a parallel gateway. In execution logic, the original process splits into two processes that run in a parallel fashion.
Process engine code implements logical multi-threading using one technical thread.
The reason for this design choice is that multiple (technical) threads must be able to communicate state information to each other if they are working on the same process. This requirement brings a number of complications. The extra logic required for safe communication between threads, as well as the extra overhead required to avoid race conditions and deadlocks, can negate any performance benefit of using such threads.
In general, the process engine executes actions in series. For example, when the process engine encounters a script task in a process, it executes the script synchronously and waits for it to complete before continuing execution. In the same way, if a process encounters a parallel gateway, the process engine sequentially triggers each of the outgoing branches, one after the other.
This is possible because execution is almost always instantaneous, meaning that it is extremely fast and produces almost no overhead. As a result, sequential execution does not create any effects that a user can notice.
Any code in a process that you supply is also executed synchronously and the process engine waits for it to finish before continuing the process. For example, if you use a Thread.sleep(…)
as part of a custom script, the process engine thread is blocked during the sleep period.
When a process reaches a service task, the process engine also invokes the handler for the task synchronously and waits for the completeWorkItem(…)
method to return before continuing execution. If your service handler is not instantaneous, implement the asynchronous execution independently in your code.
For example, your service task might invoke an external service. The delay in invoking this service remotely and waiting for the results might be significant. Therefore, invoke this service asynchronously. Your handler must only invoke the service and then return from the method; it notifies the process engine later, when the results are available. In the meantime, the process engine can continue execution of the process.
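The following sketch illustrates this pattern with a work item handler that delegates to a background thread. The callExternalService() method and the "Result" output name are placeholders, not part of any API; in a production handler you must also ensure that completing the work item from another thread happens in a valid engine context (for example, through the runtime manager or the process engine executor).
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class AsyncServiceTaskHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Start the remote call asynchronously and return immediately,
        // so the process engine thread is not blocked.
        CompletableFuture
                .supplyAsync(() -> callExternalService(workItem.getParameters()))
                .thenAccept(result -> {
                    Map<String, Object> results = new HashMap<>();
                    results.put("Result", result);
                    // Notify the process engine when the results are available.
                    manager.completeWorkItem(workItem.getId(), results);
                });
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Cancel the external invocation here if the work item is aborted.
    }

    private Object callExternalService(Map<String, Object> parameters) {
        // Placeholder for the actual remote call.
        return "done";
    }
}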
Human tasks are a typical example of a service that needs to be invoked asynchronously. A human task requires a human actor to respond to a request, and the process engine must not wait for this response.
When a human task node is triggered, the human task handler only creates a new task on the task list of the assigned actor. The process engine is then able to continue execution on the rest of the process, if necessary. The handler notifies the process engine asynchronously when the user has completed the task.
Execution errors in the process engine
Any part of process engine execution, including the task service, can throw an exception. An exception can be any class that extends java.lang.Throwable
.
Some exceptions are handled at the process level. Notably, a work item handler can throw a custom exception that specifies a subprocess for error handling. For information about developing work item handlers, see Custom tasks and work item handlers.
If an exception is not handled and reaches the process engine, it becomes an execution error. When an execution error happens, the process engine rolls back the current transaction and leaves the process in the previous stable state. After that, the process engine continues the execution of the process from that point.
Execution errors are visible to the caller that sent the request to the process engine. The process engine also includes an extendable mechanism for handling execution errors and storing information about them. This mechanism consists of the following components:
-
ExecutionErrorManager
: The entry point for error handling. This class is integrated with RuntimeManager
, which is responsible for providing it to the underlying KieSession
and TaskService
. ExecutionErrorManager
provides access to other classes in the execution error handling mechanism. When the process engine creates a
RuntimeManager
instance, it also creates a corresponding ExecutionErrorManager
instance. -
ExecutionErrorHandler
: The primary class for error handling. This class is implemented in the process engine and you normally do not need to customize or extend it directly. ExecutionErrorHandler
calls error filters to process particular errors and calls ExecutionErrorStorage
to store error information. The
ExecutionErrorHandler
is bound to the life cycle of RuntimeEngine
; it is created when a new runtime engine is created and is destroyed when RuntimeEngine
is disposed. A single instance of the ExecutionErrorHandler
is used within a given execution context or transaction. Both KieSession
and TaskService
use that instance to inform the error handling about processed nodes or tasks. ExecutionErrorHandler
is informed about the following events:
-
Starting of processing of a node instance
-
Completion of processing of a node instance
-
Starting of processing of a task instance
-
Completion of processing of a task instance
The
ExecutionErrorHandler
uses this information to record the context for errors, especially if the error itself does not provide process context information. For example, database exceptions do not carry any process information.
-
-
ExecutionErrorStorage
: The pluggable storage class for execution error information. When the process engine creates a
RuntimeManager
instance, it also creates a corresponding ExecutionErrorStorage
instance. Then the ExecutionErrorHandler
class calls this ExecutionErrorStorage
instance to store information about every execution error. The default storage implementation uses a database table to store all the available information for every error. Different detail levels might be available for different error types, as some errors might not permit extraction of detailed information.
-
A number of filters that process particular types of execution errors. You can add custom filters.
By default, every execution error is recorded as unacknowledged. You can use Business Central to view all recorded execution errors and to acknowledge them. You can also create jobs that automatically acknowledge all or some execution errors.
For information about using Business Central to view execution errors and to create jobs that acknowledge the errors automatically, see Managing and monitoring business processes in Business Central.
Execution error types and filters
Execution error handling attempts to catch and handle any kind of error. However, users might need to handle different errors in different ways. Also, different detailed information is available for different types of errors.
The error handling mechanism supports pluggable filters. Every filter processes a particular type of error. You can add filters that process specific errors in different ways, overriding default processing.
A filter is an implementation of the ExecutionErrorFilter
interface. This interface builds instances of ExecutionError
, which are later stored using the ExecutionErrorStorage
class.
The ExecutionErrorFilter
interface has the following methods:
-
accept
: Indicates if an error can be processed by the filter -
filter
: Processes an error and returns the ExecutionError
instance -
getPriority
: Indicates the priority for this filter
The execution error handler processes each error separately. For each error, it starts calling the accept
method of all registered filters, starting with the filters that have a lower priority value. If the accept
method of a filter returns true
, the handler calls the filter
method of the filter and does not call any other filters.
Because of the priority system, only one filter processes any error. More specialized filters have lower priority values. An error that is not accepted by any specialized filters reaches generic filters that have higher priority values.
The ServiceLoader
mechanism provides ExecutionErrorFilter
instances. To register custom filters, add their fully qualified class names to the META-INF/services/org.kie.internal.runtime.error.ExecutionErrorFilter
file of your service project.
IBM Business Automation Manager Open Editions ships with the following execution error filters:
Class name | Type | Priority
---|---|---
org.jbpm.runtime.manager.impl.error.filters.ProcessExecutionErrorFilter | Process | 100
org.jbpm.runtime.manager.impl.error.filters.TaskExecutionErrorFilter | Task | 80
org.jbpm.runtime.manager.impl.error.filters.DBExecutionErrorFilter | DB | 200
org.jbpm.executor.impl.error.JobExecutionErrorFilter | Job | 100
Filters with lower priority values are invoked first. Therefore, the execution error handler invokes these filters in the following order:
-
Task
-
Process
-
Job
-
DB
Event listeners in the process engine
Every time that a process or task changes to a different point in its lifecycle, the process engine generates an event. You can develop a class that receives and processes such events. This class is called an event listener.
The process engine passes an event object to this class. The object provides access to related information. For example, if the event is related to a process node, the object provides access to the process instance and the node instance.
Interfaces for event listeners
You can use the following interfaces to develop event listeners for the process engine.
Interfaces for process event listeners
You can develop a class that implements the ProcessEventListener
interface. This class can listen to process-related events, such as starting or completing a process or entering and leaving a node.
The following source code shows the different methods of the ProcessEventListener
interface:
ProcessEventListener
interface
public interface ProcessEventListener extends EventListener {
/**
* This listener method is invoked right before a process instance is being started.
* @param event
*/
void beforeProcessStarted(ProcessStartedEvent event);
/**
* This listener method is invoked right after a process instance has been started.
* @param event
*/
void afterProcessStarted(ProcessStartedEvent event);
/**
* This listener method is invoked right before a process instance is being completed (or aborted).
* @param event
*/
void beforeProcessCompleted(ProcessCompletedEvent event);
/**
* This listener method is invoked right after a process instance has been completed (or aborted).
* @param event
*/
void afterProcessCompleted(ProcessCompletedEvent event);
/**
* This listener method is invoked right before a node in a process instance is being triggered
* (which is when the node is being entered, for example when an incoming connection triggers it).
* @param event
*/
void beforeNodeTriggered(ProcessNodeTriggeredEvent event);
/**
* This listener method is invoked right after a node in a process instance has been triggered
* (which is when the node was entered, for example when an incoming connection triggered it).
* @param event
*/
void afterNodeTriggered(ProcessNodeTriggeredEvent event);
/**
* This listener method is invoked right before a node in a process instance is being left
* (which is when the node is completed, for example when it has performed the task it was
* designed for).
* @param event
*/
void beforeNodeLeft(ProcessNodeLeftEvent event);
/**
* This listener method is invoked right after a node in a process instance has been left
* (which is when the node was completed, for example when it performed the task it was
* designed for).
* @param event
*/
void afterNodeLeft(ProcessNodeLeftEvent event);
/**
* This listener method is invoked right before the value of a process variable is being changed.
* @param event
*/
void beforeVariableChanged(ProcessVariableChangedEvent event);
/**
* This listener method is invoked right after the value of a process variable has been changed.
* @param event
*/
void afterVariableChanged(ProcessVariableChangedEvent event);
/**
* This listener method is invoked right before a process/node instance's SLA has been violated.
* @param event
*/
default void beforeSLAViolated(SLAViolatedEvent event) {}
/**
* This listener method is invoked right after a process/node instance's SLA has been violated.
* @param event
*/
default void afterSLAViolated(SLAViolatedEvent event) {}
/**
* This listener method is invoked when a signal is sent
* @param event
*/
default void onSignal(SignalEvent event) {}
/**
* This listener method is invoked when a message is sent
* @param event
*/
default void onMessage(MessageEvent event) {}
}
You can implement any of these methods to process the corresponding event.
For the definition of the event classes that the process engine passes to the methods, see the org.kie.api.event.process
package in the Java documentation.
You can use the methods of the event class to retrieve other classes that contain all information about the entities involved in the event.
The following example is a part of a node-related event, such as afterNodeLeft()
, and retrieves the process instance and node type.
WorkflowProcessInstance processInstance = event.getNodeInstance().getProcessInstance();
NodeType nodeType = event.getNodeInstance().getNode().getNodeType();
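A minimal listener might extend the DefaultProcessEventListener convenience class from org.kie.api.event.process, which provides empty implementations of all methods, and override only the calls it needs. The following sketch logs process starts and node entries; the class name and the use of System.out are illustrative.
import org.kie.api.event.process.DefaultProcessEventListener;
import org.kie.api.event.process.ProcessNodeTriggeredEvent;
import org.kie.api.event.process.ProcessStartedEvent;

public class LoggingProcessEventListener extends DefaultProcessEventListener {

    @Override
    public void beforeProcessStarted(ProcessStartedEvent event) {
        // Called right before a process instance is started.
        System.out.println("Starting process " + event.getProcessInstance().getProcessId());
    }

    @Override
    public void beforeNodeTriggered(ProcessNodeTriggeredEvent event) {
        // Called right before a node in the process instance is entered.
        System.out.println("Entering node " + event.getNodeInstance().getNodeName());
    }
}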
Interfaces for task lifecycle event listeners
You can develop a class that implements the TaskLifeCycleEventListener
interface. This class can listen to events related to the lifecycle of tasks, such as assignment of an owner or completion of a task.
The following source code shows the different methods of the TaskLifeCycleEventListener
interface:
TaskLifeCycleEventListener
interface
public interface TaskLifeCycleEventListener extends EventListener {
public enum AssignmentType {
POT_OWNER,
EXCL_OWNER,
ADMIN;
}
public void beforeTaskActivatedEvent(TaskEvent event);
public void beforeTaskClaimedEvent(TaskEvent event);
public void beforeTaskSkippedEvent(TaskEvent event);
public void beforeTaskStartedEvent(TaskEvent event);
public void beforeTaskStoppedEvent(TaskEvent event);
public void beforeTaskCompletedEvent(TaskEvent event);
public void beforeTaskFailedEvent(TaskEvent event);
public void beforeTaskAddedEvent(TaskEvent event);
public void beforeTaskExitedEvent(TaskEvent event);
public void beforeTaskReleasedEvent(TaskEvent event);
public void beforeTaskResumedEvent(TaskEvent event);
public void beforeTaskSuspendedEvent(TaskEvent event);
public void beforeTaskForwardedEvent(TaskEvent event);
public void beforeTaskDelegatedEvent(TaskEvent event);
public void beforeTaskNominatedEvent(TaskEvent event);
public default void beforeTaskUpdatedEvent(TaskEvent event){};
public default void beforeTaskReassignedEvent(TaskEvent event){};
public default void beforeTaskNotificationEvent(TaskEvent event){};
public default void beforeTaskInputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){};
public default void beforeTaskOutputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){};
public default void beforeTaskAssignmentsAddedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){};
public default void beforeTaskAssignmentsRemovedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){};
public void afterTaskActivatedEvent(TaskEvent event);
public void afterTaskClaimedEvent(TaskEvent event);
public void afterTaskSkippedEvent(TaskEvent event);
public void afterTaskStartedEvent(TaskEvent event);
public void afterTaskStoppedEvent(TaskEvent event);
public void afterTaskCompletedEvent(TaskEvent event);
public void afterTaskFailedEvent(TaskEvent event);
public void afterTaskAddedEvent(TaskEvent event);
public void afterTaskExitedEvent(TaskEvent event);
public void afterTaskReleasedEvent(TaskEvent event);
public void afterTaskResumedEvent(TaskEvent event);
public void afterTaskSuspendedEvent(TaskEvent event);
public void afterTaskForwardedEvent(TaskEvent event);
public void afterTaskDelegatedEvent(TaskEvent event);
public void afterTaskNominatedEvent(TaskEvent event);
public default void afterTaskReassignedEvent(TaskEvent event){};
public default void afterTaskUpdatedEvent(TaskEvent event){};
public default void afterTaskNotificationEvent(TaskEvent event){};
public default void afterTaskInputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){};
public default void afterTaskOutputVariableChangedEvent(TaskEvent event, Map<String, Object> variables){};
public default void afterTaskAssignmentsAddedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){};
public default void afterTaskAssignmentsRemovedEvent(TaskEvent event, AssignmentType type, List<OrganizationalEntity> entities){};
}
You can implement any of these methods to process the corresponding event.
For the definition of the event class that the process engine passes to the methods, see the org.kie.api.task
package in the Java documentation.
You can use the methods of the event class to retrieve the classes representing the task, task context, and task metadata.
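For example, inside a method such as afterTaskCompletedEvent(), the event object might be used as follows. This is a brief sketch of a fragment; the getters shown belong to the task model classes in org.kie.api.task.model.
Task task = event.getTask();
long taskId = task.getId();
long processInstanceId = task.getTaskData().getProcessInstanceId();
Status status = task.getTaskData().getStatus();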
Timing of calls to event listeners
A number of event listener calls are before
and after
events, for example, beforeNodeLeft()
and afterNodeLeft()
, beforeTaskActivatedEvent()
and afterTaskActivatedEvent()
.
The before
and after
event calls typically act like a stack. If event A directly causes event B, the following sequence of calls happens:
-
Before A
-
Before B
-
After B
-
After A
For example, if leaving node X triggers node Y, all event calls related to triggering node Y occur between the beforeNodeLeft
and afterNodeLeft
calls for node X.
In the same way, if starting a process directly causes some nodes to start, all nodeTriggered
and nodeLeft
event calls occur between the beforeProcessStarted
and afterProcessStarted
calls.
This approach reflects cause and effect relationships between events. However, the timing and order of after
event calls are not always intuitive. For example, an afterProcessStarted
call can happen after the afterNodeLeft
calls for some nodes in the process.
In general, to be notified when a particular event occurs, use the before
call for the event. Use an after
call only if you want to make sure that all processing related to this event has ended, for example, when you want to be notified when all steps associated with starting a particular process instance have been completed.
Depending on the type of node, some nodes might only generate nodeLeft
calls and others might only generate nodeTriggered
calls. For example, catch intermediate event nodes do not generate nodeTriggered
calls because they are not triggered by another process node. Similarly, throw intermediate event nodes do not generate nodeLeft
calls because these nodes do not have an outgoing connection to another node.
Practices for development of event listeners
The process engine calls event listeners during processing of events or tasks. The calls happen within process engine transactions and block execution. Therefore, the event listener can affect the logic and performance of the process engine.
To ensure minimal disruption, follow the following guidelines:
-
Any action must be as short as possible.
-
A listener class must not have a state. The process engine can destroy and re-create a listener class at any time.
-
If the listener modifies any resource that exists outside the scope of the listener method, ensure that the resource is enlisted in the current transaction. The transaction might be rolled back. In this case, if the modified resource is not a part of the transaction, the state of the resource becomes inconsistent.
Database-related resources provided by Red Hat JBoss EAP are always enlisted in the current transaction. In other cases, check the JTA information for the runtime environment that you are using.
-
Do not use logic that relies on the order of execution of different event listeners.
-
Do not include interactions with different entities outside the process engine within a listener. For example, do not include REST calls for notification of events. Instead, use process nodes to complete such calls. An exception is the output of logging information; however, a logging listener must be as simple as possible.
-
You can use a listener to modify the state of the process or task that is involved in the event, for example, to change its variables.
-
You can use a listener to interact with the process engine, for example, to send signals or to interact with process instances that are not involved in the event.
Registration of event listeners
The KieSession
class implements the RuleRuntimeEventManager
interface that provides methods for registering, removing, and listing event listeners, as shown in the following list.
RuleRuntimeEventManager
interface
void addEventListener(AgendaEventListener listener);
void addEventListener(RuleRuntimeEventListener listener);
void removeEventListener(AgendaEventListener listener);
void removeEventListener(RuleRuntimeEventListener listener);
Collection<AgendaEventListener> getAgendaEventListeners();
Collection<RuleRuntimeEventListener> getRuleRuntimeEventListeners();
However, in a typical case, do not use these methods.
If you are using the RuntimeManager
interface, you can use the RuntimeEnvironment
class to register event listeners.
If you are using the Services API, you can add fully qualified class names of event listeners to the META-INF/services/org.jbpm.services.task.deadlines.NotificationListener
file in your project. The Services API also registers some default listeners, including org.jbpm.services.task.deadlines.notifications.impl.email.EmailNotificationListener
, which can send email notifications for events.
To exclude a default listener, you can add the fully qualified name of the listener to the org.kie.jbpm.notification_listeners.exclude
JVM system property.
KieRuntimeLogger
event listener
The KieServices
package contains the KieRuntimeLogger
event listener that you can add to your KIE session. You can use this listener to create an audit log. This log contains all the different events that occurred at runtime.
Note
|
These loggers are intended for debugging purposes. They might be too detailed for business-level process analysis. |
The listener implements the following logger types:
-
Console logger: This logger writes out all the events to the console. The fully qualified class name for this logger is
org.drools.core.audit.WorkingMemoryConsoleLogger
. -
File logger: This logger writes out all the events to a file using an XML representation. You can use the log file in an IDE to generate a tree-based visualization of the events that occurred during execution. The fully qualified class name for this logger is
org.drools.core.audit.WorkingMemoryFileLogger
.The file logger writes the events to disk only when closing the logger or when the number of events in the logger reaches a predefined level. Therefore, it is not suitable for debugging processes at runtime.
-
Threaded file logger: This logger writes the events to a file after a specified time interval. You can use this logger to visualize the progress in real time while debugging processes. The fully qualified class name for this logger is
org.drools.core.audit.ThreadedWorkingMemoryFileLogger
.
When creating a logger, you must pass the KIE session as an argument. The file loggers also require the name of the log file to be created. The threaded file logger requires the interval in milliseconds after which the events are saved.
Always close the logger at the end of your application.
The following example shows the use of the file logger.
import org.kie.api.KieServices;
import org.kie.api.logger.KieRuntimeLogger;
...
KieRuntimeLogger logger = KieServices.Factory.get().getLoggers().newFileLogger(ksession, "test");
// add invocations to the process engine here,
// e.g. ksession.startProcess(processId);
...
logger.close();
The log file that is created by the file-based loggers contains an XML-based overview of all the events that occurred during the runtime of the process.
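For comparison, a threaded file logger might be created as follows; the one-second interval and the "test" file name are arbitrary choices for illustration.
import org.kie.api.KieServices;
import org.kie.api.logger.KieRuntimeLogger;
...
// Flush events to the "test" log file every 1000 milliseconds.
KieRuntimeLogger logger = KieServices.Factory.get().getLoggers().newThreadedFileLogger(ksession, "test", 1000);
// add invocations to the process engine here,
// e.g. ksession.startProcess(processId);
...
logger.close();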
Process engine configuration
Several control parameters are available to alter the default behavior of the process engine to suit the requirements of your environment.
Set these parameters as JVM system properties, usually with the -D
option when starting a program such as an application server.
Name | Possible values | Default value | Description |
---|---|---|---|
|
String |
Alternative JNDI name to be used when there is no access to the default name ( NOTE: The name must be valid for the given runtime environment. Do not use this variable if there is no access to the default user transaction JNDI name. |
|
|
|
|
Enable multiple incoming and outgoing sequence flows support for activities |
|
String |
/ |
Alternative class path location of the business calendar configuration file |
|
Long |
2000 |
Specifies the delay for overdue timers to allow proper initialization, in milliseconds |
|
String |
Alternative comparator class to enable starting a process by name,
by default the |
|
|
|
|
Enable or disable loop iteration tracking for advanced loop support when using XOR gateways |
|
String |
|
Alternative JNDI name for the mail session used by Task Deadlines |
|
String |
/ |
Alternative class path location for a user group callback implementation (LDAP, DB) |
|
String |
|
Alternative location of the |
|
String |
/ |
Alternative class path location of the user info configuration (used by |
|
String |
|
Alternative separator of actors and groups for user tasks |
|
String |
Location of the Quartz configuration file to activate the Quartz-based timer service |
|
|
String |
|
Location to store data files produced by the process engine |
|
Integer |
|
Thread pool size for the process engine executor |
|
Integer |
3 |
Number of retries attempted by the process engine executor in case of an error |
|
Integer |
0 |
Frequency used to check for pending jobs by the process engine executor, in seconds. If the value is |
|
|
|
Disable the process engine executor |
|
String |
|
Fully qualified name of the class that implements |
|
String |
Fully qualified names of event listeners that must be excluded even if they would otherwise be used. Separate multiple names with commas. For example, you can add |
|
|
String |
Fully qualified names of event listeners that must be included. Separate multiple names with commas. If you set this property, only the listeners in this property are included and all other listeners are excluded. |
Persistence and transactions in the process engine
The process engine implements persistence for process states. The implementation uses the JPA framework with an SQL database backend. It can also store audit log information in the database.
The process engine also enables transactional execution of processes using the JTA framework, relying on the persistence backend to support the transactions.
Persistence of process runtime states
The process engine supports persistent storage of the runtime state of running process instances. Because it stores the runtime states, it can continue execution of a process instance if the process engine stopped or encountered a problem at any point.
The process engine also persistently stores the process definitions and the history logs of current and previous process states.
You can use the persistence.xml
file, specified by the JPA framework, to configure persistence in an SQL database. You can plug in different persistence strategies. For more information about the persistence.xml
file, see Configuration in the persistence.xml
file.
If you do not configure persistence in the process engine, process information, including process instance states, is not persisted.
When the process engine starts a process, it creates a process instance, which represents the execution of the process in that specific context. For example, when executing a process that processes a sales order, one process instance is created for each sales request.
The process instance contains the current runtime state and context of a process, including current values of any process variables. However, it does not include information about the history of past states of the process, as this information is not required for ongoing execution of a process.
When the runtime state of process instances is made persistent, you can restore the state of execution of all running processes in case the process engine fails or is stopped. You can also remove a particular process instance from memory and then restore it at a later time.
If you configure the process engine to use persistence, it automatically stores the runtime state into the database. You do not need to trigger persistence in the code.
When you restore the state of the process engine from a database, all instances are automatically restored to their last recorded state. Process instances automatically resume execution if they are triggered, for example, by an expired timer, the completion of a task that was requested by the process instance, or a signal being sent to the process instance. You do not need to load separate instances and trigger their execution manually.
The process engine also automatically reloads process instances on demand.
Safe points for persistence
The process engine saves the state of a process instance to persistent storage at safe points during the execution of the process.
When a process instance is started or resumes execution from a previous wait state, the process engine continues the execution until no more actions can be performed. If no more actions can be performed, it means that the process has completed or else has reached a wait state. If the process contains several parallel paths, all the paths must reach a wait state.
This point in the execution of the process is considered a safe point. At this point, the process engine stores the state of the process instance, and of any other process instances that were affected by the execution, to persistent storage.
The persistent audit log
The process engine can store information about the execution of process instances, including the successive historical states of the instances.
This information can be useful in many cases. For example, you might want to verify which actions have been executed for a particular process instance or to monitor and analyze the efficiency of a particular process.
However, storing history information in the runtime database would result in the database rapidly increasing in size and would also affect the performance of the persistence layer. Therefore, history log information is stored separately.
The process engine creates a log based on events that it generates during execution of processes. It uses the event listener mechanism to receive events and extract the necessary information, then persists this information to a database. The jbpm-audit
module contains an event listener that stores process-related information in a database using JPA.
You can use filters to limit the scope of the logged information.
The process engine audit log data model
You can query process engine audit log information to use it in different scenarios, for example, creating a history log for one specific process instance or analyzing the performance of all instances of a specific process.
The audit log data model is a default implementation. Depending on your use cases, you might also define your own data model for storing the information you require. You can use process event listeners to extract the information.
The data model contains three entities: one for process instance information, one for node instance information, and one for process variable instance information.
The ProcessInstanceLog
table contains the basic log information about a process instance.
Field | Description | Nullable |
---|---|---|
|
The primary key and ID of the log entity |
NOT NULL |
|
The correlation of this process instance |
|
|
Actual duration of this process instance since its start date |
|
|
When applicable, the end date of the process instance |
|
|
Optional external identifier used to correlate to some elements, for example, a deployment ID |
|
|
Optional identifier of the user who started the process instance |
|
|
The outcome of the process instance. This field contains the error code if the process instance was finished with an error event. |
|
|
The process instance ID of the parent process instance, if applicable |
|
|
The ID of the process |
|
|
The process instance ID |
NOT NULL |
|
The name of the process |
|
|
The type of the instance (process or case) |
|
|
The version of the process |
|
|
The due date of the process according to the service level agreement (SLA) |
|
|
The level of compliance with the SLA |
|
|
The start date of the process instance |
|
|
The status of the process instance that maps to the process instance state |
The NodeInstanceLog
table contains more information about which nodes were executed inside each process instance.
Whenever a node instance is entered from one of its incoming connections or is exited through one of its outgoing connections, information about the event is stored in this table.
Field | Description | Nullable |
---|---|---|
|
The primary key and ID of the log entity |
NOT NULL |
|
Actual identifier of the sequence flow that led to this node instance |
|
|
The date of the event |
|
|
Optional external identifier used to correlate to some elements, for example, a deployment ID |
|
|
The node ID of the corresponding node in the process definition |
|
|
The node instance ID |
|
|
The name of the node |
|
|
The type of the node |
|
|
The ID of the process that the process instance is executing |
|
|
The process instance ID |
NOT NULL |
|
The due date of the node according to the service level agreement (SLA) |
|
|
The level of compliance with the SLA |
|
|
The type of the event (0 = enter, 1 = exit) |
NOT NULL |
|
(Optional, only for certain node types) The identifier of the work item |
|
|
The identifier of the container, if the node is inside an embedded sub-process node |
|
|
The reference identifier |
|
|
The original node instance ID and job ID, if the node is of the scheduled event type. You can use this information to trigger the job again. |
The VariableInstanceLog
table contains information about changes in variable instances. By default, the process engine generates log entries after a variable changes its value. The process engine can also log entries before the changes.
| Field | Description | Nullable |
|---|---|---|
| | The primary key and ID of the log entity | NOT NULL |
| | Optional external identifier used to correlate to some elements, for example, a deployment ID | |
| | The date of the event | |
| | The ID of the process that the process instance is executing | |
| | The process instance ID | NOT NULL |
| | The previous value of the variable at the time that the log is made | |
| | The value of the variable at the time that the log is made | |
| | The variable ID in the process definition | |
| | The ID of the variable instance | |
The AuditTaskImpl
table contains information about user tasks.
| Field | Description | Nullable |
|---|---|---|
| | The primary key and ID of the task log entity | |
| | Time when this task was activated | |
| | Actual owner assigned to this task. This value is set only when the owner claims the task. | |
| | User who created this task | |
| | Date when the task was created | |
| | The ID of the deployment of which this task is a part | |
| | Description of the task | |
| | Due date set on this task | |
| | Name of the task | |
| | Parent task ID | |
| | Priority of the task | |
| | Process definition ID to which this task belongs | |
| | Process instance ID with which this task is associated | |
| | KIE session ID used to create this task | |
| | Current status of the task | |
| | Identifier of the task | |
| | Identifier of the work item assigned on the process side to this task ID | |
| | The date and time when the process instance state was last recorded in the persistence database | |
The BAMTaskSummary
table collects information about tasks that is used by the BAM engine to build charts and dashboards.
| Field | Description | Nullable |
|---|---|---|
| | The primary key and ID of the log entity | NOT NULL |
| | Date when the task was created | |
| | Duration since the task was created | |
| | Date when the task reached an end state (complete, exit, fail, skip) | |
| | The process instance ID | |
| | Date when the task was started | |
| | Current status of the task | |
| | Identifier of the task | |
| | Name of the task | |
| | User ID assigned to the task | |
| | The version field that serves as its optimistic lock value | |
The TaskVariableImpl
table contains information about task variable instances.
| Field | Description | Nullable |
|---|---|---|
| | The primary key and ID of the log entity | NOT NULL |
| | Date when the variable was modified most recently | |
| | Name of the task | |
| | The ID of the process that the process instance is executing | |
| | The process instance ID | |
| | Identifier of the task | |
| | Type of the variable: either input or output of the task | |
| | Variable value | |
The TaskEvent
table contains information about changes in task instances.
Operations such as claim
, start
, and stop
are stored in this table to provide a timeline view of events that happened to the given task.
| Field | Description | Nullable |
|---|---|---|
| | The primary key and ID of the log entity | NOT NULL |
| | Date when this event was saved | |
| | Log event message | |
| | The process instance ID | |
| | Identifier of the task | |
| | Type of the event. Types correspond to life cycle phases of the task | |
| | User ID assigned to the task | |
| | Identifier of the work item to which the task is assigned | |
| | The version field that serves as its optimistic lock value | |
| | Correlation key of the process instance | |
| | Type of the process instance (process or case) | |
| | The current owner of the task | |
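To read this information back from code, you can use the JPA-based audit log service. The following is a minimal sketch: the JPAAuditLogService class and its finder methods are taken from the jBPM audit API, and the persistence unit name and process instance ID are assumptions for your own environment.
EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");
AuditLogService auditLogService = new JPAAuditLogService(emf);
// History log for one specific process instance
ProcessInstanceLog processLog = auditLogService.findProcessInstance(processInstanceId);
List<NodeInstanceLog> nodeLogs = auditLogService.findNodeInstances(processInstanceId);
List<VariableInstanceLog> variableLogs = auditLogService.findVariableInstances(processInstanceId);
System.out.println(processLog.getProcessId() + " executed " + nodeLogs.size()
        + " node instances and recorded " + variableLogs.size() + " variable changes");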
Configuration for storing the process events log in a database
To log process history information in a database with a default data model, you must register the logger on your session.
KieSession ksession = ...;
ksession.addProcessEventListener(AuditLoggerFactory.newInstance(Type.JPA, ksession, null));
// invoke methods for your session here
To specify the database for storing the information, you must modify the persistence.xml
file to include the audit log classes: ProcessInstanceLog
, NodeInstanceLog
, and VariableInstanceLog
.
persistence.xml
file that includes the audit log classes<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persistence
version="2.0"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd
http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"
xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:orm="http://java.sun.com/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>jdbc/jbpm-ds</jta-data-source>
<mapping-file>META-INF/JBPMorm.xml</mapping-file>
<class>org.drools.persistence.info.SessionInfo</class>
<class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class>
<class>org.drools.persistence.info.WorkItemInfo</class>
<class>org.jbpm.persistence.correlation.CorrelationKeyInfo</class>
<class>org.jbpm.persistence.correlation.CorrelationPropertyInfo</class>
<class>org.jbpm.runtime.manager.impl.jpa.ContextMappingInfo</class>
<class>org.jbpm.process.audit.ProcessInstanceLog</class>
<class>org.jbpm.process.audit.NodeInstanceLog</class>
<class>org.jbpm.process.audit.VariableInstanceLog</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
<property name="hibernate.max_fetch_depth" value="3"/>
<property name="hibernate.hbm2ddl.auto" value="update"/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.connection.release_mode" value="after_transaction"/>
<property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform"/>
</properties>
</persistence-unit>
</persistence>
Configuration for sending the process events log to a JMS queue
When the process engine stores events in the database with the default audit log implementation, the database operation is completed synchronously, within the same transaction as the actual execution of the process instance. This operation takes time, and on highly loaded systems it might have some impact on database performance, especially when both the history log and the runtime data are stored in the same database.
As an alternative, you can use the JMS-based logger that the process engine provides. You can configure this logger to submit process log entries as messages to a JMS queue, instead of directly persisting them in the database.
You can configure the JMS logger to be transactional, in order to avoid data inconsistencies if a process engine transaction is rolled back.
ConnectionFactory factory = ...;
Queue queue = ...;
StatefulKnowledgeSession ksession = ...;
Map<String, Object> jmsProps = new HashMap<String, Object>();
jmsProps.put("jbpm.audit.jms.transacted", true);
jmsProps.put("jbpm.audit.jms.connection.factory", factory);
jmsProps.put("jbpm.audit.jms.queue", queue);
ksession.addProcessEventListener(AuditLoggerFactory.newInstance(Type.JMS, ksession, jmsProps));
// invoke methods on your session here
This is just one of the possible ways to configure the JMS audit logger. You can use the AuditLoggerFactory
class to set additional configuration parameters.
Auditing of variables
By default, values of process and task variables are stored in audit tables as string representations. To create string representations of non-string variable types, the process engine calls the variable.toString()
method. If you use a custom class for a variable, you can implement this method for the class. In many cases this representation is sufficient.
However, sometimes a string representation in the logs might not be sufficient, especially when there is a need for efficient queries by process or task variables. For example, a Person
object, used as a value for a variable, might have the following structure:
Person
object, used as a process or task variable valuepublic class Person implements Serializable {
private static final long serialVersionUID = -5172443495317321032L;
private String name;
private int age;
public Person(String name, int age) {
this.name = name;
this.age = age;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
@Override
public String toString() {
return "Person [name=" + name + ", age=" + age + "]";
}
}
The toString()
method provides a human-readable format. However, it might not be sufficient for a search. A sample string value is Person [name=john, age=34]
. Searching through a large number of such strings to find people of age 34 would make a database query inefficient.
To enable more efficient searching, you can audit variables using VariableIndexer
objects, which extract relevant parts of the variable for storage in the audit log.
VariableIndexer
interface/**
* Variable indexer that transforms a variable instance into another representation (usually string)
* for use in log queries.
*
* @param <V> type of the object that will represent the indexed variable
*/
public interface VariableIndexer<V> {
/**
* Tests if this indexer can index a given variable
*
* NOTE: only one indexer can be used for a given variable
*
* @param variable variable to be indexed
* @return true if the variable should be indexed with this indexer
*/
boolean accept(Object variable);
/**
* Performs an index/transform operation on the variable. The result of this operation can be
* either a single value or a list of values, to support complex type separation.
* For example, when the variable is of the type Person that has name, address, and phone fields,
* the indexer could build three entries out of it to represent individual fields:
* person = person.name
* address = person.address.street
* phone = person.phone
* this configuration allows advanced queries for finding relevant entries.
* @param name name of the variable
* @param variable actual variable value
 * @return list of indexed values produced from the given variable
*/
List<V> index(String name, Object variable);
}
The default indexer uses the toString()
method to produce a single audit entry for a single variable. Other indexers can return a list of objects from indexing a single variable.
To enable efficient queries for the Person
type, you can build a custom indexer that indexes a Person
instance into separate audit entries, one representing the name and another representing the age.
Person
typepublic class PersonTaskVariablesIndexer implements TaskVariableIndexer {
@Override
public boolean accept(Object variable) {
if (variable instanceof Person) {
return true;
}
return false;
}
@Override
public List<TaskVariable> index(String name, Object variable) {
Person person = (Person) variable;
List<TaskVariable> indexed = new ArrayList<TaskVariable>();
TaskVariableImpl personNameVar = new TaskVariableImpl();
personNameVar.setName("person.name");
personNameVar.setValue(person.getName());
indexed.add(personNameVar);
TaskVariableImpl personAgeVar = new TaskVariableImpl();
personAgeVar.setName("person.age");
personAgeVar.setValue(person.getAge()+"");
indexed.add(personAgeVar);
return indexed;
}
}
The process engine can use this indexer to index values when they are of the Person
type, while all other variables are indexed with the default toString()
method. Now, to query for process instances or tasks that refer to a person with age 34, you can use the following query:
-
variable name:
person.age
-
variable value:
34
As a LIKE
type query is not used, the database server can optimize the query and make it efficient on a large set of data.
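If you prefer to run such a query from code rather than through the user interface, the audit API exposes a finder for exact name and value matches. The following is a minimal sketch; it assumes that the value was also indexed as a process variable and that the audit log service is created from your existing entity manager factory.
EntityManagerFactory emf = ...;
AuditLogService auditService = new JPAAuditLogService(emf);
// Exact match on the entries produced by the custom indexer
List<VariableInstanceLog> people34 =
        auditService.findVariableInstancesByNameAndValue("person.age", "34", true);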
Custom indexers
The process engine supports indexers for both process and task variables. However, it uses different interfaces for the indexers, because they must produce different types of objects that represent an audit view of the variable.
You must implement the following interfaces to build custom indexers:
-
For process variables:
org.kie.internal.process.ProcessVariableIndexer
-
For task variables:
org.kie.internal.task.api.TaskVariableIndexer
You must implement two methods for either of the interfaces:
-
accept
: Indicates whether a type is handled by this indexer. The process engine expects that only one indexer can index a given variable value, so it uses the first indexer that accepts the type. -
index
: Indexes a value, producing an object or list of objects (usually strings) for inclusion in the audit log. A minimal process variable indexer sketch follows this list.
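The following sketch mirrors the task variable indexer shown earlier, but implements the process variable interface. It assumes that the concrete org.jbpm.process.audit.VariableInstanceLog entity is used as the indexed representation; adjust the types to match your environment.
public class PersonProcessVariablesIndexer implements ProcessVariableIndexer {
    @Override
    public boolean accept(Object variable) {
        return variable instanceof Person;
    }
    @Override
    public List<VariableInstanceLog> index(String name, Object variable) {
        Person person = (Person) variable;
        List<VariableInstanceLog> indexed = new ArrayList<VariableInstanceLog>();
        // One audit entry per field, so that each field can be queried individually
        org.jbpm.process.audit.VariableInstanceLog personNameVar = new org.jbpm.process.audit.VariableInstanceLog();
        personNameVar.setVariableId("person.name");
        personNameVar.setValue(person.getName());
        indexed.add(personNameVar);
        org.jbpm.process.audit.VariableInstanceLog personAgeVar = new org.jbpm.process.audit.VariableInstanceLog();
        personAgeVar.setVariableId("person.age");
        personAgeVar.setValue(person.getAge() + "");
        indexed.add(personAgeVar);
        return indexed;
    }
}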
After implementing the interface, you must package this implementation as a JAR file and list the implementation in one of the following files:
-
For process variables, the
META-INF/services/org.kie.internal.process.ProcessVariableIndexer
file, which lists fully qualified class names of process variable indexers (single class name per line) -
For task variables, the
META-INF/services/org.kie.internal.task.api.TaskVariableIndexer
file, which lists fully qualified class names of task variable indexers (single class name per line)
The ServiceLoader
mechanism discovers the indexers using these files. When indexing a process or task variable, the process engine examines the registered indexers to find any indexer that accepts the value of the variable. If no other indexer accepts the value, the process engine applies the default indexer that uses the toString()
method.
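For example, to register the process variable indexer sketched above, the JAR file could contain a META-INF/services/org.kie.internal.process.ProcessVariableIndexer file with a single line. The package name is a placeholder for your own implementation class.
com.example.audit.PersonProcessVariablesIndexer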
Transactions in the process engine
The process engine supports Java Transaction API (JTA) transactions.
The current version of the process engine does not support pure local transactions.
If you do not provide transaction boundaries inside your application, the process engine automatically executes each method invocation in a separate transaction.
Optionally, you can specify the transaction boundaries in the application code, for example, to combine multiple commands into one transaction.
Registration of a transaction manager
You must register a transaction manager in the environment to use user-defined transactions.
The following sample code registers the transaction manager and uses JTA calls to specify transaction boundaries.
// Create the entity manager factory
EntityManagerFactory emf = EntityManagerFactoryManager.get().getOrCreate("org.jbpm.persistence.jpa");
TransactionManager tm = TransactionManagerServices.getTransactionManager();
// Set up the runtime environment
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
.newDefaultBuilder()
.addAsset(ResourceFactory.newClassPathResource("MyProcessDefinition.bpmn2"), ResourceType.BPMN2)
.addEnvironmentEntry(EnvironmentName.TRANSACTION_MANAGER, tm)
.get();
// Get the KIE session
RuntimeManager manager = RuntimeManagerFactory.Factory.get().newPerRequestRuntimeManager(environment);
RuntimeEngine runtime = manager.getRuntimeEngine(ProcessInstanceIdContext.get());
KieSession ksession = runtime.getKieSession();
// Start the transaction
UserTransaction ut = InitialContext.doLookup("java:comp/UserTransaction");
ut.begin();
// Perform multiple commands inside one transaction
ksession.insert( new Person( "John Doe" ) );
ksession.startProcess("MyProcess");
// Commit the transaction
ut.commit();
You must provide a jndi.properties
file in your root class path to create a JNDI InitialContextFactory
object, because transaction-related objects like UserTransaction
, TransactionManager
, and TransactionSynchronizationRegistry
are registered in JNDI.
If your project includes the jbpm-test
module, this file is already included by default.
Otherwise, you must create the jndi.properties
file with the following content:
jndi.properties
filejava.naming.factory.initial=org.jbpm.test.util.CloseSafeMemoryContextFactory
org.osjava.sj.root=target/test-classes/config
org.osjava.jndi.delimiter=/
org.osjava.sj.jndi.shared=true
This configuration assumes that the simple-jndi:simple-jndi
artifact is present in the class path of your project. You can also use a different JNDI implementation.
By default, the Narayana JTA transaction manager is used. If you want to use a different JTA transaction manager, you can change the persistence.xml
file to use the required transaction manager. For example, if your application runs on Red Hat JBoss EAP version 7 or later, you can use the JBoss transaction manager. In this case, change the transaction manager property in the persistence.xml
file:
persistence.xml
file for the JBoss transaction manager<property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform" />
Warning
|
Using the Singleton strategy of the RuntimeManager class together with JTA transactions (UserTransaction or CMT) can lead to a race condition when the transaction completes. To avoid this race condition, explicitly synchronize around the KieSession instance when the transaction completes.
|
Configuring container-managed transactions
If you embed the process engine in an application that executes in container-managed transaction (CMT) mode, for example, EJB beans, you must complete additional configuration. This configuration is especially important if the application runs on an application server that does not allow a CMT application to access a UserTransaction
instance from JNDI, for example, WebSphere Application Server.
The default transaction manager implementation in the process engine relies on UserTransaction
to query transaction status and then uses the status to determine whether to start a transaction. In environments that prevent access to a UserTransaction
instance, this implementation fails.
To enable proper execution in CMT environments, the process engine provides a dedicated transaction manager implementation:
org.jbpm.persistence.jta.ContainerManagedTransactionManager
. This transaction manager expects that the transaction is active and always returns ACTIVE
when the getStatus()
method is invoked. Operations such as begin
, commit
, and rollback
are no-op methods, because the transaction manager cannot affect these operations in container-managed transaction mode.
Note
|
During process execution your code must propagate any exceptions thrown by the engine to the container to ensure that the container rolls transactions back when necessary. |
To configure this transaction manager, complete the steps in this procedure.
-
In your code, insert the transaction manager and persistence context manager into the environment before creating or loading a session:
Inserting the transaction manager and persistence context manager into the environmentEnvironment env = EnvironmentFactory.newEnvironment(); env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf); env.set(EnvironmentName.TRANSACTION_MANAGER, new ContainerManagedTransactionManager()); env.set(EnvironmentName.PERSISTENCE_CONTEXT_MANAGER, new JpaProcessPersistenceContextManager(env)); env.set(EnvironmentName.TASK_PERSISTENCE_CONTEXT_MANAGER, new JPATaskPersistenceContextManager(env));
-
In the
persistence.xml
file, configure the JPA provider. The following example useshibernate
and WebSphere Application Server.Configuring the JPA provider in thepersistence.xml
file<property name="hibernate.transaction.factory_class" value="org.hibernate.transaction.CMTTransactionFactory"/> <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.WebSphereJtaPlatform"/>
-
To dispose a KIE session, do not dispose it directly. Instead, execute the
org.jbpm.persistence.jta.ContainerManagedTransactionDisposeCommand
command. This command ensures that the session is disposed at the completion of the current transaction. In the following example, ksession
is theKieSession
object that you want to dispose.Disposing a KIE session using theContainerManagedTransactionDisposeCommand
commandksession.execute(new ContainerManagedTransactionDisposeCommand());
Directly disposing the session causes an exception at the completion of the transaction, because the process engine registers transaction synchronization to clean up the session state.
Transaction retries
When the process engine commits a transaction, sometimes the commit operation fails because another transaction is being committed at the same time. In this case, the process engine must retry the transaction.
If several retries fail, the transaction fails permanently.
You can use JVM system properties to control the retrying process.
| Property | Values | Default | Description |
|---|---|---|---|
| | Integer | 5 | This property describes how many times the process engine retries a transaction before failing permanently. |
| | Integer | 50 | The delay time before the first retry, in milliseconds. |
| | Integer | 4 | The multiplier for increasing the delay time for each subsequent retry. With the default values, the process engine waits 50 milliseconds before the first retry, 200 milliseconds before the second retry, 800 milliseconds before the third retry, and so on. |
Configuration of persistence in the process engine
If you use the process engine without configuring any persistence, it does not save runtime data to any database; no in-memory database is available by default. You can use this mode if it is required for performance reasons or when you want to manage persistence yourself.
To use JPA persistence in the process engine, you must configure it.
Configuration usually requires adding the necessary dependencies, configuring a data source, and creating the process engine classes with persistence configured.
Configuration in the persistence.xml
file
To use JPA persistence, you must add a persistence.xml
persistence configuration to your class path to configure JPA to use Hibernate and the H2 database (or any other database that you prefer). Place this file in the META-INF
directory of your project.
persistence.xml
file<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persistence
version="2.0"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd
http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"
xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:orm="http://java.sun.com/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>jdbc/jbpm-ds</jta-data-source>
<mapping-file>META-INF/JBPMorm.xml</mapping-file>
<class>org.drools.persistence.info.SessionInfo</class>
<class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class>
<class>org.drools.persistence.info.WorkItemInfo</class>
<class>org.jbpm.persistence.correlation.CorrelationKeyInfo</class>
<class>org.jbpm.persistence.correlation.CorrelationPropertyInfo</class>
<class>org.jbpm.runtime.manager.impl.jpa.ContextMappingInfo</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
<property name="hibernate.max_fetch_depth" value="3"/>
<property name="hibernate.hbm2ddl.auto" value="update"/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.connection.release_mode" value="after_transaction"/>
<property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform"/>
</properties>
</persistence-unit>
</persistence>
The example refers to a jdbc/jbpm-ds
data source. For instructions about configuring a data source, see Configuration of data sources for process engine persistence.
Configuration of data sources for process engine persistence
To configure JPA persistence in the process engine, you must provide a data source, which represents a database backend.
If you run your application in an application server, such as Red Hat JBoss EAP, you can use the application server to set up data sources, for example, by adding a data source configuration file in the deploy
directory. For instructions about creating data sources, see the documentation for the application server.
If you deploy your application to Red Hat JBoss EAP, you can create a data source by creating a configuration file in the deploy
directory:
<?xml version="1.0" encoding="UTF-8"?>
<datasources>
<local-tx-datasource>
<jndi-name>jdbc/jbpm-ds</jndi-name>
<connection-url>jdbc:h2:tcp://localhost/~/test</connection-url>
<driver-class>org.h2.jdbcx.JdbcDataSource</driver-class>
<user-name>sa</user-name>
<password></password>
</local-tx-datasource>
</datasources>
If your application runs in a plain Java environment, you can use Narayana and Tomcat DBCP by using the DataSourceFactory
class from the kie-test-util
module supplied by IBM Business Automation Manager Open Editions. See the following code fragment. This example uses the H2 in-memory database in combination with Narayana and Tomcat DBCP.
Properties driverProperties = new Properties();
driverProperties.put("user", "sa");
driverProperties.put("password", "sa");
driverProperties.put("url", "jdbc:h2:mem:jbpm-db;MVCC=true");
driverProperties.put("driverClassName", "org.h2.Driver");
driverProperties.put("className", "org.h2.jdbcx.JdbcDataSource");
PoolingDataSourceWrapper pdsw = DataSourceFactory.setupPoolingDataSource("jdbc/jbpm-ds", driverProperties);
Dependencies for persistence
Persistence requires certain JAR artifact dependencies.
The jbpm-persistence-jpa.jar
file is always required. This file contains the code for saving the runtime state whenever necessary.
Depending on the persistence solution and database you are using, you might need additional dependencies. The default configuration combination includes the following components:
-
Hibernate as the JPA persistence provider
-
H2 in-memory database
-
Narayana for JTA-based transaction management
-
Tomcat DBCP for connection pooling capabilities
This configuration requires the following additional dependencies:
-
jbpm-persistence-jpa
(org.jbpm
) -
drools-persistence-jpa
(org.drools
) -
persistence-api
(javax.persistence
) -
hibernate-entitymanager
(org.hibernate
) -
hibernate-annotations
(org.hibernate
) -
hibernate-commons-annotations
(org.hibernate
) -
hibernate-core
(org.hibernate
) -
commons-collections
(commons-collections
) -
dom4j
(org.dom4j
) -
jta
(javax.transaction
) -
narayana-jta
(org.jboss.narayana.jta
) -
tomcat-dbcp
(org.apache.tomcat
) -
jboss-transaction-api_1.2_spec
(org.jboss.spec.javax.transaction
) -
javassist
(javassist
) -
slf4j-api
(org.slf4j
) -
slf4j-jdk14
(org.slf4j
) -
simple-jndi
(simple-jndi
) -
h2
(com.h2database
) -
jbpm-test
(org.jbpm
) only for testing, do not include this artifact in the production application
Creating a KIE session with persistence
If your code creates KIE sessions directly, you can use the JPAKnowledgeService
class to create your KIE session. This approach provides full access to the underlying configuration.
-
Create a KIE session using the
JPAKnowledgeService
class, based on a KIE base, a KIE session configuration (if necessary), and an environment. The environment must contain a reference to the Entity Manager Factory that you use for persistence.Creating a KIE session with persistence// create the entity manager factory and register it in the environment EntityManagerFactory emf = Persistence.createEntityManagerFactory( "org.jbpm.persistence.jpa" ); Environment env = KnowledgeBaseFactory.newEnvironment(); env.set( EnvironmentName.ENTITY_MANAGER_FACTORY, emf ); // create a new KIE session that uses JPA to store the runtime state StatefulKnowledgeSession ksession = JPAKnowledgeService.newStatefulKnowledgeSession( kbase, null, env ); int sessionId = ksession.getId(); // invoke methods on your session here ksession.startProcess( "MyProcess" ); ksession.dispose();
-
To re-create a session from the database based on a specific session ID, use the
JPAKnowledgeService.loadStatefulKnowledgeSession()
method:Re-creating a KIE session from the persistence database// re-create the session from database using the sessionId ksession = JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env );
Persistence in the runtime manager
If your code uses the RuntimeManager
class, use the RuntimeEnvironmentBuilder
class to configure the environment for persistence. By default, the runtime manager searches for the org.jbpm.persistence.jpa
persistence unit.
The following example creates a KieSession
with an empty context.
RuntimeEnvironmentBuilder builder = RuntimeEnvironmentBuilder.Factory.get()
.newDefaultBuilder()
.knowledgeBase(kbase);
RuntimeManager manager = RuntimeManagerFactory.Factory.get()
.newSingletonRuntimeManager(builder.get(), "com.sample:example:1.0");
RuntimeEngine engine = manager.getRuntimeEngine(EmptyContext.get());
KieSession ksession = engine.getKieSession();
The previous example requires a KIE base as the kbase
parameter. You can use a kmodule.xml
KJAR descriptor on the class path to build the KIE base.
kmodule.xml
KJAR descriptorKieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();
KieBase kbase = kContainer.getKieBase("kbase");
A kmodule.xml
descriptor file can include an attribute for resource packages to scan to find and deploy process engine workflows.
kmodule.xml
descriptor file<kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule">
<kbase name="kbase" packages="com.sample"/>
</kmodule>
To control the persistence, you can use the RuntimeEnvironmentBuilder::entityManagerFactory
methods.
EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");
RuntimeEnvironment runtimeEnv = RuntimeEnvironmentBuilder.Factory
.get()
.newDefaultBuilder()
.entityManagerFactory(emf)
.knowledgeBase(kbase)
.get();
StatefulKnowledgeSession ksession = (StatefulKnowledgeSession) RuntimeManagerFactory.Factory.get()
.newSingletonRuntimeManager(runtimeEnv)
.getRuntimeEngine(EmptyContext.get())
.getKieSession();
After creating the ksession
KIE session in this example, you can call methods in ksession
, for example, startProcess()
. The process engine persists the runtime state in the configured data source.
You can restore a process instance from persistent storage by using the process instance ID. The runtime manager automatically re-creates the required session.
RuntimeEngine runtime = manager.getRuntimeEngine(ProcessInstanceIdContext.get(processInstanceId));
KieSession session = runtime.getKieSession();
Persisting process variables in a separate database schema in IBM Business Automation Manager Open Editions
When you create process variables to use within the processes that you define, IBM Business Automation Manager Open Editions stores those process variables as binary data in a default database schema. You can persist process variables in a separate database schema for greater flexibility in maintaining and implementing your process data.
For example, persisting your process variables in a separate database schema can help you perform the following tasks:
-
Maintain process variables in human-readable format
-
Make the variables available to services outside of IBM Business Automation Manager Open Editions
-
Clear the log of the default database tables in IBM Business Automation Manager Open Editions without losing process variable data
Note
|
This procedure applies to process variables only. This procedure does not apply to case variables. |
-
You have defined processes in IBM Business Automation Manager Open Editions for which you want to implement variables.
-
If you want to persist variables in a database schema outside of IBM Business Automation Manager Open Editions, you have created a data source and the separate database schema that you want to use. For information about creating data sources, see Configuring Business Central settings and properties.
-
In the data object file that you use as a process variable, add the following elements to configure variable persistence:
Example Person.java object configured for variable persistence@javax.persistence.Entity //(1) @javax.persistence.Table(name = "Person") //(2) public class Person extends org.drools.persistence.jpa.marshaller.VariableEntity //(3) implements java.io.Serializable { //(4) static final long serialVersionUID = 1L; @javax.persistence.GeneratedValue(strategy = javax.persistence.GenerationType.AUTO, generator = "PERSON_ID_GENERATOR") @javax.persistence.Id //(5) @javax.persistence.SequenceGenerator(name = "PERSON_ID_GENERATOR", sequenceName = "PERSON_ID_SEQ") private java.lang.Long id; private java.lang.String name; private java.lang.Integer age; public Person() { } public java.lang.Long getId() { return this.id; } public void setId(java.lang.Long id) { this.id = id; } public java.lang.String getName() { return this.name; } public void setName(java.lang.String name) { this.name = name; } public java.lang.Integer getAge() { return this.age; } public void setAge(java.lang.Integer age) { this.age = age; } public Person(java.lang.Long id, java.lang.String name, java.lang.Integer age) { this.id = id; this.name = name; this.age = age; } }
-
Configures the data object as a persistence entity.
-
Defines the database table name used for the data object.
-
Creates a separate
MappedVariable
mapping table that maintains the relationship between this data object and the associated process instance. If you do not need this relationship maintained, you do not need to extend theVariableEntity
class. Without this extension, the data object is still persisted, but contains no additional data. -
Configures the data object as a serializable object.
-
Sets a persistence ID for the object.
To make the data object persistable using Business Central, navigate to the data object file in your project, click the Persistence icon in the upper-right corner of the window, and configure the persistence behavior:
Figure 2. Persistence configuration in Business Central -
-
In the
pom.xml
file of your project, add the following dependency for persistence support. This dependency contains theVariableEntity
class that you configured in your data object.Project dependency for persistence<dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> <version>${bamoe.version}</version> <scope>provided</scope> </dependency>
-
In the
~/META-INF/kie-deployment-descriptor.xml
file of your project, configure the JPA marshalling strategy and a persistence unit to be used with the marshaller. The JPA marshalling strategy and persistence unit are required for objects defined as entities.JPA marshaller and persistence unit configured in the kie-deployment-descriptor.xml file<marshalling-strategy> <resolver>mvel</resolver> <identifier>new org.drools.persistence.jpa.marshaller.JPAPlaceholderResolverStrategy("myPersistenceUnit", classLoader)</identifier> <parameters/> </marshalling-strategy>
-
In the
~/META-INF
directory of your project, create apersistence.xml
file that specifies in which data source you want to persist the process variable:Example persistence.xml file with data source configuration<persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:orm="http://java.sun.com/xml/ns/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"> <persistence-unit name="myPersistenceUnit" transaction-type="JTA"> <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider> <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source> //(1) <class>org.space.example.Person</class> <exclude-unlisted-classes>true</exclude-unlisted-classes> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/> <property name="hibernate.max_fetch_depth" value="3"/> <property name="hibernate.hbm2ddl.auto" value="update"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.id.new_generator_mappings" value="false"/> <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform"/> </properties> </persistence-unit> </persistence>
-
Sets the data source in which the process variable is persisted
To configure the marshalling strategy, persistence unit, and data source using Business Central, navigate to project Settings → Deployments → Marshalling Strategies and to project Settings → Persistence:
Figure 3. JPA marshaller configuration in Business CentralFigure 4. Persistence unit and data source configuration in Business Central -
Integration with Java frameworks
You can integrate the process engine with several industry-standard Java frameworks, such as Apache Maven, CDI, Spring, and EJB.
Integration with Apache Maven
The process engine uses Maven for two main purposes:
-
To create KJAR artifacts, which are deployment units that the process engine can install into a runtime environment for execution
-
To manage dependencies for building applications that embed the process engine
Maven artifacts as deployment units
The process engine provides a mechanism to deploy processes from Apache Maven artifacts. These artifacts are in the JAR file format and are known as KJAR files, or informally KJARs. A KJAR file includes a descriptor that defines a KIE base and KIE session. It also contains the business assets, including process definitions, that the process engine can load into the KIE base.
The descriptor of a KJAR file is represented by an XML file named kie-deployment-descriptor.xml
. The descriptor can be empty, in which case the default configuration applies. It can also provide custom configuration for the KIE base and KIE session.
kie-deployment-descriptor.xml
descriptor<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<deployment-descriptor xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<persistence-unit>org.jbpm.domain</persistence-unit>
<audit-persistence-unit>org.jbpm.domain</audit-persistence-unit>
<audit-mode>JPA</audit-mode>
<persistence-mode>JPA</persistence-mode>
<runtime-strategy>SINGLETON</runtime-strategy>
<marshalling-strategies/>
<event-listeners/>
<task-event-listeners/>
<globals/>
<work-item-handlers />
<environment-entries/>
<configurations/>
<required-roles/>
<remoteable-classes/>
</deployment-descriptor>
With an empty kie-deployment-descriptor.xml
descriptor, the following default configuration applies:
-
A single default KIE base is created with the following characteristics:
-
It contains all assets from all packages in the KJAR file
-
Its event processing mode is set to
cloud
-
Its equality behaviour is set to
identity
-
Its declarative agenda is disabled
-
For CDI applications, its scope is set to
ApplicationScope
-
-
A single default stateless KIE session is created with the following characteristics:
-
It is bound to the single KIE base
-
Its clock type is set to
real time
-
For CDI applications, its scope is set to
ApplicationScope
-
-
A single default stateful KIE session is created with the following characteristics:
-
It is bound to the single KIE base
-
Its clock type is set to
real time
-
For CDI applications, its scope is set to
ApplicationScope
-
If you do not want to use the defaults, you can change all configuration settings using the kie-deployment-descriptor.xml
file. You can find the complete specification of all elements for this file in the XSD schema.
The following sample shows a custom kie-deployment-descriptor.xml
file that configures the runtime engine. This example configures the most common options and includes a single work item handler. You can also use the kie-deployment-descriptor.xml
file to configure other options.
kie-deployment-descriptor.xml
file<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<deployment-descriptor xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<persistence-unit>org.jbpm.domain</persistence-unit>
<audit-persistence-unit>org.jbpm.domain</audit-persistence-unit>
<audit-mode>JPA</audit-mode>
<persistence-mode>JPA</persistence-mode>
<runtime-strategy>SINGLETON</runtime-strategy>
<marshalling-strategies/>
<event-listeners/>
<task-event-listeners/>
<globals/>
<work-item-handlers>
<work-item-handler>
<resolver>mvel</resolver>
<identifier>new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession, classLoader)</identifier>
<parameters/>
<name>Service Task</name>
</work-item-handler>
</work-item-handlers>
<environment-entries/>
<configurations/>
<required-roles/>
<remoteable-classes/>
</deployment-descriptor>
Note
|
If you use the |
You can reference KJAR artifacts, like any other Maven artifacts, using the GAV (group, artifact, version) value. When deploying units from KJAR files, the process engine uses the GAV value as the release ID in the KIE API. You can use the GAV value to deploy KJAR artifacts into a runtime environment, for example, a KIE Server.
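For example, the following sketch resolves a KJAR from the Maven repository by its GAV value and creates a KIE container and session from it. The GAV coordinates are placeholders for your own artifact.
KieServices ks = KieServices.Factory.get();
ReleaseId releaseId = ks.newReleaseId("com.sample", "example", "1.0.0");
// Resolve the KJAR artifact by its GAV value and load its KIE base and KIE session definitions
KieContainer kieContainer = ks.newKieContainer(releaseId);
KieSession ksession = kieContainer.newKieSession();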
Dependency management with Maven
When you build projects that embed the process engine, use Apache Maven to configure all dependencies required by the process engine.
The process engine provides a set of BOMs (Bills of Material) to simplify declaring artifact dependencies.
Use the top-level pom.xml
file of your project to define dependency management for embedding the process engine, as shown in the following example. The example includes the main runtime dependencies, which are applicable whether the application is deployed on an application server, in a servlet container, or as a standalone application.
This example also includes version properties for components that applications using the process engine commonly need. Adjust the list of components and versions as necessary. You can view the third-party dependency versions that the product team tests in the parent pom.xml
file in the Github repository.
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<version.org.drools>
</version.org.drools>
<version.org.jbpm>{MAVEN_ARTIFACT_VERSION}</version.org.jbpm>
<hibernate.version>5.3.17.Final</hibernate.version>
<hibernate.core.version>5.3.17.Final</hibernate.core.version>
<slf4j.version>1.7.26</slf4j.version>
<jboss.javaee.version>1.0.0.Final</jboss.javaee.version>
<logback.version>1.2.9</logback.version>
<h2.version>1.3.173</h2.version>
<narayana.version>5.9.0.Final</narayana.version>
<jta.version>1.0.1.Final</jta.version>
<junit.version>4.13.1</junit.version>
</properties>
<dependencyManagement>
<dependencies>
<!-- define Drools BOM -->
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-bom</artifactId>
<type>pom</type>
<version>${version.org.drools}</version>
<scope>import</scope>
</dependency>
<!-- define jBPM BOM -->
<dependency>
<groupId>org.jbpm</groupId>
<artifactId>jbpm-bom</artifactId>
<type>pom</type>
<version>${version.org.jbpm}</version>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
In modules that use the process engine Java API (KIE API), declare the necessary process engine dependencies and other components that the modules require, as in the following example:
<dependency>
<groupId>org.jbpm</groupId>
<artifactId>jbpm-flow</artifactId>
</dependency>
<dependency>
<groupId>org.jbpm</groupId>
<artifactId>jbpm-flow-builder</artifactId>
</dependency>
<dependency>
<groupId>org.jbpm</groupId>
<artifactId>jbpm-bpmn2</artifactId>
</dependency>
<dependency>
<groupId>org.jbpm</groupId>
<artifactId>jbpm-persistence-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.jbpm</groupId>
<artifactId>jbpm-human-task-core</artifactId>
</dependency>
<dependency>
<groupId>org.jbpm</groupId>
<artifactId>jbpm-runtime-manager</artifactId>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
</dependency>
If your application uses persistence and transactions, you must add artifacts that implement the JTA and JPA frameworks. Additional dependencies are required for testing the workflow components before actual deployment.
The following example defines the dependencies that include Hibernate for JPA, the H2 database for persistence, Narayana for JTA, and the components needed for testing. This example uses the test
scope. Adjust this example as necessary for your application. For production use, remove the test
scope.
<!-- test dependencies -->
<dependency>
<groupId>org.jbpm</groupId>
<artifactId>jbpm-shared-services</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>${logback.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>${hibernate.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>${hibernate.core.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<version>${h2.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jboss.spec.javax.transaction</groupId>
<artifactId>jboss-transaction-api_1.2_spec</artifactId>
<version>${jta.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jboss.narayana.jta</groupId>
<artifactId>narayana-jta</artifactId>
<version>${narayana.version}</version>
<scope>test</scope>
</dependency>
With this configuration you can embed the process engine in your application and use the KIE API to interact with processes, rules, and events.
Maven repositories
To use IBM product versions of Maven dependencies, you must configure the Maven repository in the top-level pom.xml
file.
Alternatively, download the bamoe-8.0.4-maven-repository.zip
product deliverable file from the IBM Support page and make the contents of this file available as a local Maven repository.
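The following fragment is a sketch of such a repository entry in the top-level pom.xml file. The repository ID and URL are placeholders; point the URL at the repository location provided for your installation, for example the directory where you extracted the downloaded repository archive.
<repositories>
  <repository>
    <id>bamoe-maven-repository</id>
    <!-- Placeholder URL: use your remote repository or the unpacked local repository archive -->
    <url>file:///opt/bamoe/maven-repository</url>
    <releases>
      <enabled>true</enabled>
    </releases>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
</repositories>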
Integration with CDI
The process engine supports integration with CDI automatically. You can use most of its API in the CDI framework without any modification.
The process engine also provides some dedicated modules that are designed specifically for CDI containers. The most important module is jbpm-services-cdi
, which provides CDI wrappers for process engine services. You can use these wrappers to integrate the process engine in CDI applications. The module provides the following set of services:
-
DeploymentService
-
ProcessService
-
UserTaskService
-
RuntimeDataService
-
DefinitionService
These services are available for injection in any other CDI bean.
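For example, a CDI bean might combine these services as follows. This is a minimal sketch: the deployment ID, process ID, task ID, and user ID are placeholders, and the method signatures are taken from the jBPM services API.
public class OrderProcessBean {
    @Inject
    private ProcessService processService;
    @Inject
    private UserTaskService userTaskService;
    public long startOrderProcess(String deploymentId) {
        // Start a process definition that is part of an already deployed unit
        return processService.startProcess(deploymentId, "com.sample.MyProcess");
    }
    public void completeTask(long taskId) {
        // Other task life cycle operations (claim, release, and so on) are also available
        userTaskService.start(taskId, "john");
        userTaskService.complete(taskId, "john", new HashMap<String, Object>());
    }
}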
Deployment service for CDI
The DeploymentService
service deploys and undeploys deployment units in the runtime environment. When you deploy a unit using this service, the deployment unit becomes ready for execution and a RuntimeManager
instance is created for it. You can also use the DeploymentService
to retrieve the following objects:
-
The
RuntimeManager
instance for a given deployment ID -
The
DeployedUnit
instance that represents the complete deployment unit for the given deployment ID -
The list of all deployed units known to the deployment service
By default, the deployment service does not save information about deployed units to any persistent storage. In the CDI framework, the component that uses the service can save and restore deployment unit information, for example, using a database, file system, or repository.
The deployment service fires CDI events on deployment and undeployment. The component that uses the service can process these events to store deployments and remove them from the store when they are undeployed.
-
A
DeploymentEvent
with the@Deploy
qualifier is fired on deployment of a unit -
A
DeploymentEvent
with the@Undeploy
qualifier is fired on undeployment of a unit
You can use the CDI observer mechanism to get notification on these events.
The following example receives notification on deployment of a unit and can save the deployment:
public void saveDeployment(@Observes @Deploy DeploymentEvent event) {
// Store deployed unit information
DeployedUnit deployedUnit = event.getDeployedUnit();
}
The following example receives notification on deployment of a unit and can remove the deployment from storage:
public void removeDeployment(@Observes @Undeploy DeploymentEvent event) {
// Remove deployment with the ID event.getDeploymentId()
}
Several implementations of the DeploymentService
service are possible, so you must use qualifiers to instruct the CDI container to inject a particular implementation. A matching implementation of DeploymentUnit
must exist for every implementation of DeploymentService
.
The process engine provides the KmoduleDeploymentService
implementation. This implementation is designed to work with KmoduleDeploymentUnits
, which are small descriptors that are included in a KJAR file. This implementation is the typical solution for most use cases. The qualifier for this implementation is @Kjar
.
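The following sketch deploys a KJAR through the injected service and retrieves the runtime manager created for it. The GAV coordinates are placeholders, and the KModuleDeploymentUnit class name is assumed from the jBPM services implementation.
@Inject
@Kjar
private DeploymentService deploymentService;
public void deployUnit() {
    // Deploy a KJAR identified by its GAV value
    DeploymentUnit unit = new KModuleDeploymentUnit("com.sample", "example", "1.0.0");
    deploymentService.deploy(unit);
    // The runtime manager for this deployment is now available
    RuntimeManager manager = deploymentService.getRuntimeManager(unit.getIdentifier());
}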
Form provider service for CDI
The FormProviderService
service provides access to form representations, which are usually displayed on the user interface for both process forms and user task forms.
The service relies on the concept of isolated form providers that can provide different capabilities and be backed by different technologies. The FormProvider
interface describes the contract for implementations of form providers.
FormProvider
interfacepublic interface FormProvider {
int getPriority();
String render(String name, ProcessDesc process, Map<String, Object> renderContext);
String render(String name, Task task, ProcessDesc process, Map<String, Object> renderContext);
}
Implementations of the FormProvider
interface must define a priority value. When the FormProviderService
service needs to render a form, it calls the available providers in their priority order.
The lower the priority value, the higher the priority of the provider. For example, a provider with a priority of 5 is evaluated before a provider with a priority of 10. For each required form, the service iterates over the available providers in the order of their priority, until one of them delivers the content. In the worst-case scenario, a simple text-based form is returned.
The process engine provides the following implementations of FormProvider
:
-
A provider that delivers forms created in the Form Modeller tool, with a priority of 2
-
A FreeMarker-based implementation that supports process and task forms, with a priority of 3
-
The default forms provider, returning a simple text-based form, used as a last resort if no other provider delivers any content, with a priority of 1000
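A custom provider only needs to implement the contract shown above. The following minimal sketch returns a simple HTML fragment and uses an arbitrary priority value between the built-in providers; the rendering logic is purely illustrative.
public class SimpleHtmlFormProvider implements FormProvider {
    @Override
    public int getPriority() {
        // Evaluated after the built-in providers (priorities 2 and 3) and before the
        // default text-based provider (priority 1000)
        return 10;
    }
    @Override
    public String render(String name, ProcessDesc process, Map<String, Object> renderContext) {
        return "<form><h2>Start process: " + name + "</h2></form>";
    }
    @Override
    public String render(String name, Task task, ProcessDesc process, Map<String, Object> renderContext) {
        return "<form><h2>Work on task: " + name + "</h2></form>";
    }
}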
Runtime data service for CDI
The RuntimeDataService
service provides access to data that is available at runtime, including the following data:
-
The available processes to be executed, with various filters
-
The active process instances, with various filters
-
The process instance history
-
The process instance variables
-
The active and completed nodes of a process instance
The default implementation of RuntimeDataService
observes deployment events and indexes all deployed processes to expose them to the calling components.
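For example, an injected RuntimeDataService instance can page through definitions and instances with a query context. This is a sketch; the method and class names are taken from the jBPM services API.
@Inject
private RuntimeDataService runtimeDataService;
public void listRuntimeData() {
    // First page (offset 0) of up to 10 results each
    Collection<ProcessDefinition> definitions = runtimeDataService.getProcesses(new QueryContext(0, 10));
    Collection<ProcessInstanceDesc> instances = runtimeDataService.getProcessInstances(new QueryContext(0, 10));
}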
Definition service for CDI
The DefinitionService
service provides access to process details that are stored as part of BPMN2 XML definitions.
Note
|
Before using any method that provides information, invoke the |
The BPMN2DataService
implementation provides access to the following data:
-
The overall description of the process for the given process definition
-
The collection of all user tasks found in the process definition
-
The information about the defined inputs for a user task node
-
The information about defined outputs for a user task node
-
The IDs of reusable processes (call activity) that are defined within a given process definition
-
The information about process variables that are defined within a given process definition
-
The information about all organizational entities (users and groups) that are included in the process definition. Depending on the particular process definition, the returned values for users and groups can contain the following information:
-
The actual user or group name
-
The process variable that is used to get the actual user or group name on runtime, for example,
#{manager}
-
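For example, an injected DefinitionService instance can be used as follows. This is a sketch; the buildProcessDefinition call and the other method names are assumed from the jBPM services API, and the BPMN2 content is expected to be available as a string.
@Inject
private DefinitionService bpmn2Service;
public void inspectProcess(String deploymentId, String bpmn2Content) {
    // Build (and cache) the definition from the BPMN2 XML before querying details
    ProcessDefinition definition = bpmn2Service.buildProcessDefinition(deploymentId, bpmn2Content, null, true);
    Collection<UserTaskDefinition> userTasks = bpmn2Service.getTasksDefinitions(deploymentId, definition.getId());
    Map<String, String> processVariables = bpmn2Service.getProcessVariables(deploymentId, definition.getId());
}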
CDI integration configuration
To use the jbpm-services-cdi
module in your CDI framework, you must provide some beans to satisfy the dependencies of the included service implementations.
Several beans can be required, depending on the usage scenario:
-
The entity manager and entity manager factory
-
The user group callback for human tasks
-
The identity provider to pass authenticated user information to the services
When running in a JEE environment, such as Red Hat JBoss EAP, the following producer bean satisfies all requirements of the jbpm-services-cdi
module.
jbpm-services-cdi
module in a JEE environmentpublic class EnvironmentProducer {
@PersistenceUnit(unitName = "org.jbpm.domain")
private EntityManagerFactory emf;
@Inject
@Selectable
private UserGroupInfoProducer userGroupInfoProducer;
@Inject
@Kjar
private DeploymentService deploymentService;
@Produces
public EntityManagerFactory getEntityManagerFactory() {
return this.emf;
}
@Produces
public org.kie.api.task.UserGroupCallback produceSelectedUserGroupCalback() {
return userGroupInfoProducer.produceCallback();
}
@Produces
public UserInfo produceUserInfo() {
return userGroupInfoProducer.produceUserInfo();
}
@Produces
@Named("Logs")
public TaskLifeCycleEventListener produceTaskAuditListener() {
return new JPATaskLifeCycleEventListener(true);
}
@Produces
public DeploymentService getDeploymentService() {
return this.deploymentService;
}
@Produces
public IdentityProvider produceIdentityProvider() {
return new IdentityProvider() {
// implement IdentityProvider
};
}
}
The beans.xml
file for the application must enable a proper alternative for user group info callback. This alternative is taken based on the @Selectable
qualifier.
beans.xml
file<beans xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee https://docs.jboss.org/cdi/beans_1_0.xsd">
<alternatives>
<class>org.jbpm.kie.services.cdi.producer.JAASUserGroupInfoProducer</class>
</alternatives>
</beans>
Note
|
|
Optionally, you can provide several other producers to deliver WorkItemHandlers
and Process
, Agenda
, WorkingMemory
event listeners. You can provide these components by implementing the following interfaces:
/**
* Enables providing custom implementations to deliver WorkItem name and WorkItemHandler instance pairs
* for the runtime.
* <br/>
* This interface is invoked by the RegisterableItemsFactory implementation (in particular InjectableRegisterableItemsFactory
* in the CDI framework) for every KieSession. Always return new instances of objects to avoid unexpected
* results.
*
*/
public interface WorkItemHandlerProducer {
/**
* Returns map of work items(key = work item name, value = work item handler instance)
* to be registered on KieSession
* <br/>
* The following parameters might be given:
* <ul>
* <li>ksession</li>
* <li>taskService</li>
* <li>runtimeManager</li>
* </ul>
*
* @param identifier - identifier of the owner - usually the RuntimeManager. This parameter allows the producer to filter out
* and provide valid instances for a given owner
* @param params - the owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances
* @return map of work item handler instances (always return new instances when this method is invoked)
*/
Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params);
}
/**
* Enables defining custom producers for known EventListeners. There might be several
* implementations that might provide a different listener instance based on the context in which they are executed.
* <br/>
* This interface is invoked by the RegisterableItemsFactory implementation (in particular, InjectableRegisterableItemsFactory
* in the CDI framework) for every KieSession. Always return new instances of objects to avoid unexpected results.
*
* @param <T> type of the event listener - ProcessEventListener, AgendaEventListener, WorkingMemoryEventListener
*/
public interface EventListenerProducer<T> {
/**
* Returns list of instances for given (T) type of listeners
* <br/>
* Parameters that might be given are:
* <ul>
* <li>ksession</li>
* <li>taskService</li>
* <li>runtimeManager</li>
* </ul>
* @param identifier - identifier of the owner - usually RuntimeManager. This parameter allows the producer to filter out
* and provide valid instances for given owner
* @param params - the owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances
* @return list of listener instances (always return new instances when this method is invoked)
*/
List<T> getEventListeners(String identifier, Map<String, Object> params);
}
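Similarly, a minimal event listener producer might look like the following sketch, which returns a new logging ProcessEventListener for every KieSession. The import of the producer interface assumes the org.kie.internal.runtime.manager package.
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.kie.api.event.process.DefaultProcessEventListener;
import org.kie.api.event.process.ProcessEventListener;
import org.kie.api.event.process.ProcessStartedEvent;
import org.kie.internal.runtime.manager.EventListenerProducer;
public class LoggingProcessEventListenerProducer implements EventListenerProducer<ProcessEventListener> {
    @Override
    public List<ProcessEventListener> getEventListeners(String identifier, Map<String, Object> params) {
        // Return a fresh listener instance on every invocation, as required by the contract
        ProcessEventListener listener = new DefaultProcessEventListener() {
            @Override
            public void afterProcessStarted(ProcessStartedEvent event) {
                System.out.println("Process started: " + event.getProcessInstance().getProcessId());
            }
        };
        return Collections.singletonList(listener);
    }
}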
The beans implementing these two interfaces are collected at runtime and invoked when the RuntimeManager
class builds a KieSession
instance.
Runtime manager as a CDI bean
You can inject the RuntimeManager
class as a CDI bean into any other CDI bean within your application. The RuntimeEnvironment
class must be properly produced to enable correct initialization of the RuntimeManager
instance.
The following CDI qualifiers reference the existing runtime manager strategies:
-
@Singleton
-
@PerRequest
-
@PerProcessInstance
For more information about the runtime manager, see Runtime manager.
Note
Though you can inject the RuntimeManager class directly, in typical cases use the DeploymentService service, which manages RuntimeManager instances for you.
To use the runtime manager, you must add the RuntimeEnvironment
class to the producer that is defined in the CDI integration configuration section.
RuntimeEnvironment
class
public class EnvironmentProducer {
//Add the same producers as for services
@Produces
@Singleton
@PerRequest
@PerProcessInstance
public RuntimeEnvironment produceEnvironment(EntityManagerFactory emf) {
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
.newDefaultBuilder()
.entityManagerFactory(emf)
.userGroupCallback(getUserGroupCallback())
.registerableItemsFactory(InjectableRegisterableItemsFactory.getFactory(beanManager, null))
.addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"), ResourceType.BPMN2)
.addAsset(ResourceFactory.newClassPathResource("BPMN2-UserTask.bpmn2"), ResourceType.BPMN2)
.get();
return environment;
}
}
In this example, a single producer method is capable of providing the RuntimeEnvironment
class for all runtime manager strategies by specifying all qualifiers on the method level.
When the complete producer is available, the RuntimeManager
class can be injected into a CDI bean in the application:
RuntimeManager
class
public class ProcessEngine {
@Inject
@Singleton
private RuntimeManager singletonManager;
public void startProcess() {
RuntimeEngine runtime = singletonManager.getRuntimeEngine(EmptyContext.get());
KieSession ksession = runtime.getKieSession();
ProcessInstance processInstance = ksession.startProcess("UserTask");
singletonManager.disposeRuntimeEngine(runtime);
}
}
If you inject the RuntimeManager
class, only one instance of RuntimeManager
might exist in the application. In typical cases, use the DeploymentService
service, which creates RuntimeManager
instances as necessary.
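For example, a CDI bean that relies on the DeploymentService service might look like the following minimal sketch. The KJAR coordinates and process ID are hypothetical, and the sketch assumes that the DeployedUnit returned by the deployment service exposes its RuntimeManager instance.
import javax.inject.Inject;
import org.jbpm.kie.services.impl.KModuleDeploymentUnit;
import org.jbpm.services.api.DeploymentService;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.internal.runtime.manager.context.EmptyContext;
public class DeploymentDrivenProcessStarter {
    @Inject
    private DeploymentService deploymentService;
    public void deployAndStart() {
        // Hypothetical KJAR coordinates; the deployment service builds the runtime manager for the unit
        KModuleDeploymentUnit unit = new KModuleDeploymentUnit("org.example", "sample-kjar", "1.0.0");
        deploymentService.deploy(unit);
        // Assumption: DeployedUnit exposes the runtime manager created for this deployment
        RuntimeManager manager = deploymentService.getDeployedUnit(unit.getIdentifier()).getRuntimeManager();
        RuntimeEngine runtime = manager.getRuntimeEngine(EmptyContext.get());
        KieSession ksession = runtime.getKieSession();
        ksession.startProcess("org.example.sample-process"); // hypothetical process ID
        manager.disposeRuntimeEngine(runtime);
    }
}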
As an alternative to DeploymentService
, you can inject the RuntimeManagerFactory
class and then the application can use it to create RuntimeManager
instances. In this case, the EnvironmentProducer
definition is still required. The following example shows a simple ProcessEngine bean.
public class ProcessEngine {
@Inject
private RuntimeManagerFactory managerFactory;
@Inject
private EntityManagerFactory emf;
@Inject
private BeanManager beanManager;
public void startProcess() {
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
.newDefaultBuilder()
.entityManagerFactory(emf)
.addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"), ResourceType.BPMN2)
.addAsset(ResourceFactory.newClassPathResource("BPMN2-UserTask.bpmn2"), ResourceType.BPMN2)
.registerableItemsFactory(InjectableRegisterableItemsFactory.getFactory(beanManager, null))
.get();
RuntimeManager manager = managerFactory.newSingletonRuntimeManager(environment);
RuntimeEngine runtime = manager.getRuntimeEngine(EmptyContext.get());
KieSession ksession = runtime.getKieSession();
ProcessInstance processInstance = ksession.startProcess("UserTask");
manager.disposeRuntimeEngine(runtime);
manager.close();
}
}
Integration with Spring
While there are several ways to use the process engine with the Spring framework, the following two approaches are most frequently used:
-
Direct use of the Runtime Manager API
-
Use of process engine services
Both approaches are tested and valid.
If your application needs to use only one runtime manager, use the direct Runtime Manager API, because it is the simplest way to use the process engine within a Spring application.
If your application needs to use multiple instances of the runtime manager, use process engine services, which encapsulate best practices by providing a dynamic runtime environment.
Direct use of the runtime manager API in Spring
The runtime manager manages the process engine and task service in synchronization. For more information about the runtime manager, see Runtime manager.
To set up the runtime manager in the Spring framework, use the following factory beans:
-
org.kie.spring.factorybeans.RuntimeEnvironmentFactoryBean
-
org.kie.spring.factorybeans.RuntimeManagerFactoryBean
-
org.kie.spring.factorybeans.TaskServiceFactoryBean
These factory beans provide a standard way to configure the spring.xml
file for your Spring application.
RuntimeEnvironmentFactoryBean
bean
The RuntimeEnvironmentFactoryBean
factory bean produces instances of RuntimeEnvironment
. These instances are required for creating RuntimeManager
instances.
The bean supports creating the following types of RuntimeEnvironment
instances with different default configurations:
-
DEFAULT
: The default, or most common, configuration for the runtime manager
-
EMPTY
: A completely empty environment that you can configure manually
-
DEFAULT_IN_MEMORY
: The same configuration as DEFAULT, but without persistence of the runtime engine
-
DEFAULT_KJAR
: The same configuration as DEFAULT, but assets are loaded from KJAR artifacts, which are identified by the release ID or the GAV value
-
DEFAULT_KJAR_CL
: The configuration is built from the kmodule.xml
descriptor in a KJAR artifact
Mandatory properties depend on the selected type. However, knowledge information must be present for all types. This requirement means that one of the following kinds of information must be provided:
-
knowledgeBase
-
assets
-
releaseId
-
groupId, artifactId, version
For the DEFAULT
, DEFAULT_KJAR
, and DEFAULT_KJAR_CL
types, you must also configure persistence by providing the following parameters:
-
Entity manager factory
-
Transaction manager
The transaction manager must be the Spring transaction manager, because persistence and transaction support is configured based on this transaction manager.
Optionally, you can provide an EntityManager
instance instead of creating a new instance from EntityManagerFactory
; for example, you might use a shared entity manager from Spring.
All other properties are optional. They can override defaults that are determined by the selected type of the runtime environment.
RuntimeManagerFactoryBean
bean
The RuntimeManagerFactoryBean
factory bean produces RuntimeManager
instances of a given type, based on the provided RuntimeEnvironment
instance.
The supported types correspond to runtime manager strategies:
-
SINGLETON
-
PER_REQUEST
-
PER_PROCESS_INSTANCE
The default type, when no type is specified, is SINGLETON
.
The identifier is a mandatory property, because every runtime manager must be uniquely identified. All instances created by this factory are cached, so they can be properly disposed using the destroy method (close()
).
TaskServiceFactoryBean
bean
The TaskServiceFactoryBean
factory bean produces an instance of TaskService
based on given properties. You must provide the following mandatory properties:
-
Entity manager factory
-
Transaction manager
The transaction manager must be the Spring transaction manager, because persistence and transaction support is configured based on this transaction manager.
Optionally, you can provide an EntityManager
instance instead of creating a new instance from EntityManagerFactory
; for example, you might use a shared entity manager from Spring.
You can also set additional optional properties for the task service instance:
-
userGroupCallback
: The implementation of UserGroupCallback
that the task service must use; the default value is MVELUserGroupCallbackImpl
-
userInfo
: The implementation of UserInfo
that the task service must use; the default value is DefaultUserInfo
-
listener
: A list of TaskLifeCycleEventListener
listeners that must be notified upon various operations on tasks
This factory bean creates a single instance of the task service. By design, this instance must be shared across all beans in the Spring environment.
Configuring a sample runtime manager with a Spring application
The following procedure is an example of complete configuration for a single runtime manager within a Spring application.
-
Configure the entity manager factory and the transaction manager:
Configuring the entity manager factory and the transaction manager in the spring.xml
file
<bean id="jbpmEMF" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="persistenceUnitName" value="org.jbpm.persistence.spring.jta"/>
</bean>

<bean id="jbpmEM" class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
  <property name="entityManagerFactory" ref="jbpmEMF"/>
</bean>

<bean id="narayanaUserTransaction" factory-method="userTransaction" class="com.arjuna.ats.jta.UserTransaction" />

<bean id="narayanaTransactionManager" factory-method="transactionManager" class="com.arjuna.ats.jta.TransactionManager" />

<bean id="jbpmTxManager" class="org.springframework.transaction.jta.JtaTransactionManager">
  <property name="transactionManager" ref="narayanaTransactionManager" />
  <property name="userTransaction" ref="narayanaUserTransaction" />
</bean>
These settings define the following persistence configuration:
-
JTA transaction manager (backed by Narayana JTA - for unit tests or servlet containers)
-
Entity manager factory for the
org.jbpm.persistence.spring.jta
persistence unit
-
-
Configure the business process resource:
Configuring the business process resource in the spring.xml
file
<bean id="process" factory-method="newClassPathResource" class="org.kie.internal.io.ResourceFactory">
  <constructor-arg>
    <value>jbpm/processes/sample.bpmn</value>
  </constructor-arg>
</bean>
These settings define a single process that is to be available for execution. The name of the resource is
sample.bpmn
and it must be available on the class path. You can use the class path as a simple way to include resources for trying out the process engine.
-
Configure the
RuntimeEnvironment
instance with the entity manager, transaction manager, and resources:
Configuring the RuntimeEnvironment
instance in the spring.xml
file
<bean id="runtimeEnvironment" class="org.kie.spring.factorybeans.RuntimeEnvironmentFactoryBean">
  <property name="type" value="DEFAULT"/>
  <property name="entityManagerFactory" ref="jbpmEMF"/>
  <property name="transactionManager" ref="jbpmTxManager"/>
  <property name="assets">
    <map>
      <entry key-ref="process"><util:constant static-field="org.kie.api.io.ResourceType.BPMN2"/></entry>
    </map>
  </property>
</bean>
These settings define a default runtime environment for the runtime manager.
-
Create a
RuntimeManager
instance based on the environment:
<bean id="runtimeManager" class="org.kie.spring.factorybeans.RuntimeManagerFactoryBean" destroy-method="close">
  <property name="identifier" value="spring-rm"/>
  <property name="runtimeEnvironment" ref="runtimeEnvironment"/>
</bean>
After these steps you can use the runtime manager to execute processes in the Spring environment, using the EntityManagerFactory
class and the JTA transaction manager.
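As an illustration, an application might then load the Spring context and use the runtimeManager bean as in the following sketch. The context file name spring.xml and the process ID are hypothetical.
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.internal.runtime.manager.context.EmptyContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
public class SpringRuntimeManagerExample {
    public static void main(String[] args) {
        // Load the Spring context that contains the runtime manager configuration (hypothetical location)
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("spring.xml");
        RuntimeManager manager = context.getBean("runtimeManager", RuntimeManager.class);
        RuntimeEngine runtime = manager.getRuntimeEngine(EmptyContext.get());
        KieSession ksession = runtime.getKieSession();
        ksession.startProcess("org.jbpm.sample.process"); // hypothetical ID of the process in sample.bpmn
        manager.disposeRuntimeEngine(runtime);
        context.close();
    }
}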
You can find complete Spring configuration files for different strategies in the repository.
Additional configuration options for the runtime manager in the Spring framework
In addition to the configuration with the EntityManagerFactory
class and the JTA transaction manager, as described in Configuring a sample runtime manager with a Spring application, you can use other configuration options for the runtime manager in the Spring framework:
-
JTA and the
SharedEntityManager
class
-
Local Persistence Unit and the
EntityManagerFactory
class
-
Local Persistence Unit and the
SharedEntityManager
class
If your application is configured with a Local Persistence Unit and uses the AuditService
service to query process engine history data, you must add the org.kie.api.runtime.EnvironmentName.USE_LOCAL_TRANSACTIONS
environment entry to the RuntimeEnvironment
instance configuration:
RuntimeEnvironment
instance configuration for a Local Persistence Unit in the spring.xml
file
<bean id="runtimeEnvironment" class="org.kie.spring.factorybeans.RuntimeEnvironmentFactoryBean">
...
<property name="environmentEntries" ref="env" />
</bean>
...
<util:map id="env" key-type="java.lang.String" value-type="java.lang.Object">
<entry>
<key>
<util:constant
static-field="org.kie.api.runtime.EnvironmentName.USE_LOCAL_TRANSACTIONS" />
</key>
<value>true</value>
</entry>
</util:map>
You can find more examples of configuration options in the repository: configuration files and test cases.
Process engine services with Spring
You might want to create a dynamic Spring application, where you can add and remove business assets such as process definitions, data models, rules, and forms without restarting the application.
In this case, use process engine services. Process engine services are designed to be framework-agnostic, and separate modules bring in the required framework-specific add-ons.
The jbpm-kie-services
module contains the code logic of the services. A Spring application can consume these pure Java services.
The only code you must add to your Spring application to configure process engine services is the implementation of the IdentityProvider
interface. This implementation depends on your security configuration. The following example implementation uses Spring Security, though it might not cover all available security features for a Spring application.
IdentityProvider
interface using Spring Security
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.kie.internal.identity.IdentityProvider;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;
public class SpringSecurityIdentityProvider implements IdentityProvider {
public String getName() {
Authentication auth = SecurityContextHolder.getContext().getAuthentication();
if (auth != null && auth.isAuthenticated()) {
return auth.getName();
}
return "system";
}
public List<String> getRoles() {
Authentication auth = SecurityContextHolder.getContext().getAuthentication();
if (auth != null && auth.isAuthenticated()) {
List<String> roles = new ArrayList<String>();
for (GrantedAuthority ga : auth.getAuthorities()) {
roles.add(ga.getAuthority());
}
return roles;
}
return Collections.emptyList();
}
public boolean hasRole(String role) {
return false;
}
}
Configuring process engine services with a Spring application
The following procedure is an example of complete configuration for process engine services within a Spring application.
-
Configure transactions:
Configuring transactions in the spring.xml
file
<context:annotation-config />
<tx:annotation-driven />
<tx:jta-transaction-manager />

<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager" />
-
Configure JPA and persistence:
Configuring JPA and persistence in the spring.xml
file
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" depends-on="transactionManager">
  <property name="persistenceXmlLocation" value="classpath:/META-INF/jbpm-persistence.xml" />
</bean>
-
Configure security and user and group information providers:
Configuring security and user and group information providers in the spring.xml
file
<util:properties id="roleProperties" location="classpath:/roles.properties" />

<bean id="userGroupCallback" class="org.jbpm.services.task.identity.JBossUserGroupCallbackImpl">
  <constructor-arg name="userGroups" ref="roleProperties"></constructor-arg>
</bean>

<bean id="identityProvider" class="org.jbpm.spring.SpringSecurityIdentityProvider"/>
-
Configure the runtime manager factory. This factory is Spring context aware, so it can interact with the Spring container in the correct way and support the necessary services, including the transactional command service and the task service:
Configuring the runtime manager factory in the spring.xml
file
<bean id="runtimeManagerFactory" class="org.kie.spring.manager.SpringRuntimeManagerFactoryImpl">
  <property name="transactionManager" ref="transactionManager"/>
  <property name="userGroupCallback" ref="userGroupCallback"/>
</bean>

<bean id="transactionCmdService" class="org.jbpm.shared.services.impl.TransactionalCommandService">
  <constructor-arg name="emf" ref="entityManagerFactory"></constructor-arg>
</bean>

<bean id="taskService" class="org.kie.spring.factorybeans.TaskServiceFactoryBean" destroy-method="close">
  <property name="entityManagerFactory" ref="entityManagerFactory"/>
  <property name="transactionManager" ref="transactionManager"/>
  <property name="userGroupCallback" ref="userGroupCallback"/>
  <property name="listeners">
    <list>
      <bean class="org.jbpm.services.task.audit.JPATaskLifeCycleEventListener">
        <constructor-arg value="true"/>
      </bean>
    </list>
  </property>
</bean>
-
Configure process engine services as Spring beans:
Configuring process engine services as Spring beans in the spring.xml
file
<!-- Definition service -->
<bean id="definitionService" class="org.jbpm.kie.services.impl.bpmn2.BPMN2DataServiceImpl"/>

<!-- Runtime data service -->
<bean id="runtimeDataService" class="org.jbpm.kie.services.impl.RuntimeDataServiceImpl">
  <property name="commandService" ref="transactionCmdService"/>
  <property name="identityProvider" ref="identityProvider"/>
  <property name="taskService" ref="taskService"/>
</bean>

<!-- Deployment service -->
<bean id="deploymentService" class="org.jbpm.kie.services.impl.KModuleDeploymentService" depends-on="entityManagerFactory" init-method="onInit">
  <property name="bpmn2Service" ref="definitionService"/>
  <property name="emf" ref="entityManagerFactory"/>
  <property name="managerFactory" ref="runtimeManagerFactory"/>
  <property name="identityProvider" ref="identityProvider"/>
  <property name="runtimeDataService" ref="runtimeDataService"/>
</bean>

<!-- Process service -->
<bean id="processService" class="org.jbpm.kie.services.impl.ProcessServiceImpl" depends-on="deploymentService">
  <property name="dataService" ref="runtimeDataService"/>
  <property name="deploymentService" ref="deploymentService"/>
</bean>

<!-- User task service -->
<bean id="userTaskService" class="org.jbpm.kie.services.impl.UserTaskServiceImpl" depends-on="deploymentService">
  <property name="dataService" ref="runtimeDataService"/>
  <property name="deploymentService" ref="deploymentService"/>
</bean>

<!-- Register the runtime data service as a listener on the deployment service
     so it can receive notifications about deployed and undeployed units -->
<bean id="data" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean" depends-on="deploymentService">
  <property name="targetObject" ref="deploymentService"></property>
  <property name="targetMethod"><value>addListener</value></property>
  <property name="arguments">
    <list>
      <ref bean="runtimeDataService"/>
    </list>
  </property>
</bean>
Your Spring application can now use the process engine services.
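For example, an application might look up the configured service beans and use them as in the following sketch. The context file location, KJAR coordinates, and process ID are hypothetical.
import org.jbpm.kie.services.impl.KModuleDeploymentUnit;
import org.jbpm.services.api.DeploymentService;
import org.jbpm.services.api.ProcessService;
import org.springframework.context.support.ClassPathXmlApplicationContext;
public class ProcessEngineServicesClient {
    public static void main(String[] args) {
        // Load the Spring context that defines the process engine service beans (hypothetical location)
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("spring.xml");
        DeploymentService deploymentService = context.getBean("deploymentService", DeploymentService.class);
        ProcessService processService = context.getBean("processService", ProcessService.class);
        // Deploy a KJAR (hypothetical GAV) and start a process defined in it (hypothetical process ID)
        KModuleDeploymentUnit unit = new KModuleDeploymentUnit("org.example", "sample-kjar", "1.0.0");
        deploymentService.deploy(unit);
        Long processInstanceId = processService.startProcess(unit.getIdentifier(), "org.example.sample-process");
        System.out.println("Started process instance " + processInstanceId);
        context.close();
    }
}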
Integration with EJB
The process engine provides a complete integration layer for Enterprise Java Beans (EJB). This layer supports both local and remote EJB interaction.
The following modules provide EJB services:
-
jbpm-services-ejb-api
: The API module that extends the jbpm-services-api
module with EJB-specific interfaces and objects
-
jbpm-services-ejb-impl
: An EJB extension for core services
-
jbpm-services-ejb-timer
: A process engine Scheduler Service implementation based on the EJB Timer Service
-
jbpm-services-ejb-client
: An EJB remote client implementation for remote interaction, which supports Red Hat JBoss EAP by default
The EJB layer is based on process engine services. It provides almost the same capabilities as the core module, though some limitations exist if you use the remote interface.
The main limitation affects the deployment service, which, if it is used as a remote EJB service, supports only the following methods:
-
deploy()
-
undeploy()
-
activate()
-
deactivate()
-
isDeployed()
Other methods are excluded because they return instances of runtime objects, such as RuntimeManager
, which cannot be used over the remote interface.
All other services provide the same functionality over EJB as the versions included in the core module.
Implementations for EJB services
As an extension of process engine core services, EJB services provide EJB-based execution semantics and are based on various EJB-specific features.
-
DeploymentServiceEJBImpl
is implemented as an EJB singleton with container-managed concurrency. Its lock type is set to write
.
-
DefinitionServiceEJBImpl
is implemented as an EJB singleton with container-managed concurrency. Its overall lock type is set to read
and for the buildProcessDefinition()
method the lock type is set to write
.
-
ProcessServiceEJBImpl
is implemented as a stateless session bean.
-
RuntimeDataServiceEJBImpl
is implemented as an EJB singleton. For the majority of methods the lock type is set to read
. For the following methods the lock type is set to write
:
-
onDeploy()
-
onUnDeploy()
-
onActivate()
-
onDeactivate()
-
-
UserTaskServiceEJBImpl
is implemented as a stateless session bean.
Transactions
The EJB container manages transactions in EJB services. For this reason, you do not need to set up any transaction manager or user transaction within your application code.
Identity provider
The default identity provider is based on the EJBContext
interface and relies on caller principal information for both name and roles. The IdentityProvider
interface provides two methods related to roles:
-
getRoles()
returns an empty list, because the EJBContext
interface does not provide an option to fetch all roles for a particular user
-
hasRole()
delegates to the isCallerInRole()
method of the context
To ensure that valid information is available to the EJB environment, you must follow standard JEE security practices to authenticate and authorize users. If no authentication or authorization is configured for EJB services, an anonymous user is always assumed.
If you use a different security model, you can use CDI-style injection for the IdentityProvider
object for EJB services. In this case, create a valid CDI bean that implements the org.kie.internal.identity.IdentityProvider
interface and make this bean available for injection with your application. This implementation will take precedence over the EJBContext
-based identity provider.
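A minimal sketch of such a CDI bean might look like the following. The hard-coded user name and roles are placeholders for a lookup against your own security framework.
import java.util.Arrays;
import java.util.List;
import javax.enterprise.context.ApplicationScoped;
import org.kie.internal.identity.IdentityProvider;
@ApplicationScoped
public class CustomIdentityProvider implements IdentityProvider {
    @Override
    public String getName() {
        // Placeholder: resolve the current user from your own security framework
        return "john";
    }
    @Override
    public List<String> getRoles() {
        // Placeholder: resolve the current user's roles from your own security framework
        return Arrays.asList("users", "managers");
    }
    @Override
    public boolean hasRole(String role) {
        return getRoles().contains(role);
    }
}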
Deployment synchronization
Deployment synchronization is enabled by default and attempts to synchronize any deployments every 3 seconds. It is implemented as an EJB singleton with container-managed concurrency. Its lock type is set to write
. It uses the EJB timer service to schedule synchronization jobs.
EJB scheduler service
The process engine uses the scheduler service to handle time-based activities such as timer events and deadlines. When running in an EJB environment, the process engine uses a scheduler based on the EJB timer service. It registers this scheduler for all RuntimeManager
instances.
You might need to use a configuration specific to an application server to support cluster operation.
UserGroupCallback
and UserInfo
implementation selection
The required implementations of the UserGroupCallback
and UserInfo
interfaces might differ between applications. These interfaces cannot be injected directly with EJB. You can use the following system properties to select existing implementations or to use custom implementations of these interfaces for the process engine:
-
org.jbpm.ht.callback
: This property selects the implementation for the UserGroupCallback
interface:
-
mvel
: The default implementation, typically used for testing.
-
ldap
: The LDAP-based implementation. This implementation requires additional configuration in the jbpm.usergroup.callback.properties
file.
-
db
: The database-based implementation. This implementation requires additional configuration in the jbpm.usergroup.callback.properties
file.
-
jaas
: An implementation that requests user information from the container.
-
props
: A simple property-based callback. This implementation requires an additional properties file that contains all users and groups.
-
custom
: A custom implementation. You must provide the fully-qualified class name of the implementation in the org.jbpm.ht.custom.callback
system property.
-
-
org.jbpm.ht.userinfo
: This property selects the implementation for the UserInfo
interface:
-
ldap
: The LDAP-based implementation. This implementation requires additional configuration in the jbpm-user.info.properties
file.
-
db
: The database-based implementation. This implementation requires additional configuration in the jbpm-user.info.properties
file.
-
props
: A simple property-based implementation. This implementation requires an additional properties file that contains all user information.
-
custom
: A custom implementation. You must provide the fully-qualified class name of the implementation in the org.jbpm.ht.custom.userinfo
system property.
-
Typically, set the system properties in the startup configuration of the application server or JVM. You can also set the properties in the code before using the services. For example, you can provide a custom @Startup
bean that configures these system properties.
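For example, a startup singleton along the following lines could set the properties before the services are used. The selected jaas and props values are only illustrative.
import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
@Singleton
@Startup
public class HumanTaskConfigurationBean {
    @PostConstruct
    public void configure() {
        // Select the JAAS-based user group callback and the property-file-based user info implementation
        System.setProperty("org.jbpm.ht.callback", "jaas");
        System.setProperty("org.jbpm.ht.userinfo", "props");
    }
}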
Local EJB interfaces
The following local EJB service interfaces extend core services:
-
org.jbpm.services.ejb.api.DefinitionServiceEJBLocal
-
org.jbpm.services.ejb.api.DeploymentServiceEJBLocal
-
org.jbpm.services.ejb.api.ProcessServiceEJBLocal
-
org.jbpm.services.ejb.api.RuntimeDataServiceEJBLocal
-
org.jbpm.services.ejb.api.UserTaskServiceEJBLocal
You must use these interfaces as injection points and annotate them with @EJB
:
@EJB
private DefinitionServiceEJBLocal bpmn2Service;
@EJB
private DeploymentServiceEJBLocal deploymentService;
@EJB
private ProcessServiceEJBLocal processService;
@EJB
private RuntimeDataServiceEJBLocal runtimeDataService;
After injecting these interfaces, invoke operations on them in the same way as on core modules. No restrictions exist for using local interfaces.
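For example, a session bean that injects these interfaces might deploy a unit and start a process as in the following sketch. The KJAR coordinates and process ID are hypothetical.
import javax.ejb.EJB;
import javax.ejb.Stateless;
import org.jbpm.kie.services.impl.KModuleDeploymentUnit;
import org.jbpm.services.ejb.api.DeploymentServiceEJBLocal;
import org.jbpm.services.ejb.api.ProcessServiceEJBLocal;
@Stateless
public class ProcessStarterBean {
    @EJB
    private DeploymentServiceEJBLocal deploymentService;
    @EJB
    private ProcessServiceEJBLocal processService;
    public Long deployAndStart() {
        // Hypothetical KJAR coordinates and process ID
        KModuleDeploymentUnit unit = new KModuleDeploymentUnit("org.example", "sample-kjar", "1.0.0");
        deploymentService.deploy(unit);
        return processService.startProcess(unit.getIdentifier(), "org.example.sample-process");
    }
}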
Remote EJB interfaces
The following dedicated remote EJB interfaces extend core services:
-
org.jbpm.services.ejb.api.DefinitionServiceEJBRemote
-
org.jbpm.services.ejb.api.DeploymentServiceEJBRemote
-
org.jbpm.services.ejb.api.ProcessServiceEJBRemote
-
org.jbpm.services.ejb.api.RuntimeDataServiceEJBRemote
-
org.jbpm.services.ejb.api.UserTaskServiceEJBRemote
You can use these interfaces in the same way as local interfaces, with the exception of handling custom types.
You can define custom types in two ways. Globally defined types are available on the application classpath and included in the enterprise application. If you define a type locally to the deployment unit, the type is declared in a project dependency (for example, in a KJAR file) and is resolved at deployment time.
Globally available types do not require any special handling. The EJB container automatically marshalls the data when handling remote requests. However, local custom types are not visible to the EJB container by default.
The process engine EJB services provide a mechanism to work with custom types. They provide the following two additional types:
-
org.jbpm.services.ejb.remote.api.RemoteObject
: A serializable wrapper class for single-value parameters
-
org.jbpm.services.ejb.remote.api.RemoteMap
: A dedicated java.util.Map
implementation to simplify remote invocation of service methods that accept custom object input. The internal implementation of the map holds content that is already serialized, in order to avoid additional serialization at sending time. This implementation does not include some of the methods of
java.util.Map
that are usually not used when sending data.
These special objects perform eager serialization to bytes using an ObjectOutputStream
object. They remove the need for serialization of data in the EJB client/container. Because no serialization is needed, it is not necessary to share the custom data model with the EJB container.
The following example code works with local types and remote EJB services:
// Start a process with custom types via remote EJB
Map<String, Object> parameters = new RemoteMap();
Person person = new org.jbpm.test.Person("john", 25, true);
parameters.put("person", person);
Long processInstanceId = processService.startProcess(deploymentUnit.getIdentifier(), "custom-data-project.work-on-custom-data", parameters);
// Fetch task data and complete a task with custom types via remote EJB
Map<String, Object> data = userTaskService.getTaskInputContentByTaskId(taskId);
Person fromTaskPerson = (Person) data.get("_person");
fromTaskPerson.setName("John Doe");
RemoteMap outcome = new RemoteMap();
outcome.put("person_", fromTaskPerson);
userTaskService.complete(taskId, "john", outcome);
In a similar way, you can use the RemoteObject
class to send an event to a process instance:
// Send an event with a custom type via remote EJB
Person person = new org.jbpm.test.Person("john", 25, true);
RemoteObject myObject = new RemoteObject(person);
processService.signalProcessInstance(processInstanceId, "MySignal", myObject);
Remote EJB client
Remote client support is provided by an implementation of the ClientServiceFactory
interface, which is a facade for application-server-specific code:
ClientServiceFactory
interface/**
* Generic service factory used for remote lookups that are usually container specific.
*
*/
public interface ClientServiceFactory {
/**
* Returns unique name of given factory implementation
* @return
*/
String getName();
/**
* Returns remote view of given service interface from selected application
* @param application application identifier on the container
* @param serviceInterface remote service interface to be found
* @return
* @throws NamingException
*/
<T> T getService(String application, Class<T> serviceInterface) throws NamingException;
}
You can dynamically register implementations using the ServiceLoader
mechanism. By default, only one implementation is available in Red Hat JBoss EAP.
Each ClientServiceFactory
implementation must provide a name. This name is used to register it within the client registry. You can look up implementations by name.
The following code gets the default Red Hat JBoss EAP remote client:
// Retrieve a valid client service factory
ClientServiceFactory factory = ServiceFactoryProvider.getProvider("JBoss");
// Set the application variable to the module name
String application = "sample-war-ejb-app";
// Retrieve the required service from the factory
DeploymentServiceEJBRemote deploymentService = factory.getService(application, DeploymentServiceEJBRemote.class);
After retrieving a service you can use its methods.
When working with Red Hat JBoss EAP and the remote client you can add the following Maven dependency to bring in all EJB client libraries:
<dependency>
<groupId>org.jboss.as</groupId>
<artifactId>jboss-as-ejb-client-bom</artifactId>
<version>7.4.1.Final</version> <!-- use the valid version for the server you run on -->
<optional>true</optional>
<type>pom</type>
</dependency>
Integration with OSGi
All core process engine JAR files and core dependencies are OSGi-enabled. The following additional process engine JAR files are also OSGi-enabled:
-
jbpm-flow
-
jbpm-flow-builder
-
jbpm-bpmn2
OSGi-enabled JAR files contain MANIFEST.MF
files in the META-INF
directory. These files contain data such as the required dependencies. You can add such JAR files to an OSGi environment.
For additional information about the OSGi infrastructure, see the OSGi documentation.
Note
Support for integration with the OSGi framework is deprecated. It does not receive any new enhancements or features and will be removed in a future release.