High availability
In large workload environments, you can deploy IBM Process Mining in a multi-node configuration to provide high availability.
Requirements
For high availability, you must configure the following:
- Load balancer
- Minimum of two application servers
- Data server
Load balancer
A load balancer manages the incoming data traffic to the IBM Process Mining servers. Before you configure the load balancer, ensure the following prerequisites:
- Install TLS security and SSL certificates
- Configure round-robin forwarding
- Define the target application servers by using the port numbers `<IP address>:8080`, or `<IP address>:443` if the application server requires HTTPS configuration

You can adopt a load balancer with the health check feature. To configure the health check feature, use the path `/analytics/healthchecks/ping`.
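As an illustration, a round-robin NGINX front end for two application servers might look like the following sketch. The host names, ports, and certificate paths are placeholders, not product defaults. Note that active health checks against `/analytics/healthchecks/ping` require NGINX Plus or an external checker; open source NGINX only retries failed upstreams passively.

```nginx
# Hypothetical NGINX load balancer for two IBM Process Mining nodes.
# Host names and certificate paths are examples only.
upstream processmining {
    # round-robin is the default balancing method
    server pm-node1.example.com:8080;
    server pm-node2.example.com:8080;
}

server {
    listen 443 ssl;
    server_name pm.example.com;

    ssl_certificate     /etc/nginx/certs/pm.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/pm.example.com.key;

    location / {
        proxy_pass http://processmining;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```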
Application server
On each application server, install IBM Process Mining by using the standard installation procedure. For more information, see the Basic setup topic.
You can set up NGINX to expose HTTPS.
When you configure the application server, ensure that each node adheres to the following rules:

- MongoDB is installed outside the application server. For more information on MongoDB installation, see the Installing MongoDB topic.
- The MongoDB user must have the following roles: `dbAdmin`, `readWrite`, and `read`.
- You can use the following script to create a user in MongoDB:

  ```javascript
  db.createUser(
    {
      user: "processminingusr",
      pwd: "hwbfeY63fF_2",
      roles: [
        { role: "dbAdmin", db: "processmining" },
        { role: "readWrite", db: "processmining" },
        { role: "read", db: "processmining" }
      ]
    }
  )
  ```

- When creating roles in MongoDB, you can choose different configurations. For more information, see MongoDB.
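One way to run the user-creation script is to save it to a file and pass it to `mongosh` against the external MongoDB host. The connection string below is a placeholder and assumes an existing administrative user; adapt it to your deployment:

```shell
# Hypothetical invocation: create the Process Mining user on an external
# MongoDB host. Host name, file name, and admin credentials are examples only.
mongosh "mongodb://admin@mongo.example.com:27017/processmining?authSource=admin" \
  --file create-processmining-user.js
```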
Data server
This component is not required; it is an option for dedicated installations. A data server includes the following:
- NFS server for exposing a shared network folder
- MongoDB
- MonetDB
- Redis
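As a sketch of the NFS part of a data server, the following shows an export on the data server and a matching mount on an application node. The paths, subnet, and host name are placeholder values, not product defaults:

```shell
# On the data server: export a shared folder (example path and subnet).
# Add to /etc/exports:
#   /srv/processmining  192.168.10.0/24(rw,sync,no_subtree_check)
sudo exportfs -ra

# On each application node: mount the share at the path used as
# the Process Mining file system home (example mount point).
sudo mount -t nfs dataserver.example.com:/srv/processmining /nfs/cluster1
```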
Configuring Custom Process Apps to bypass High availability
Custom Process Apps does not support High availability in IBM Process Mining. To use Custom Process Apps in this environment, you must configure Process Apps to work on only one of the nodes and disable it on all other nodes. In this configuration, the selected node manages all processes that are related to Process Apps, bypassing the default High availability settings.
To configure the Custom Process Apps to bypass High availability in IBM Process Mining, complete the following steps:
- On the selected node, open the `${PM_HOME}/etc/accelerator-core.properties` file and set Process Apps to access Process Mining through the load balancer by using the fully qualified domain name (FQDN). Then, deploy the node as normal.

  ```properties
  # process mining url: update to the process mining url you want to point at
  # pm.host=${PM_HOST}
  pm.host=https://<LOAD_BALANCER_HOSTNAME>
  ```
- Disable Process Apps on all other nodes, and point Process Mining to the selected single node. Apply the following configuration to all nodes:

  - Copy the public key that you created on the Process Apps node (`${PM_HOME}/etc/acf-ext-publicKey.der`) to the other nodes.
  - In the `${PM_HOME}/etc/processmining.conf` file, configure Process Mining to point to the selected Process Apps node:

    ```
    accelerator: {
        host: "http://localhost:8080/pm-accelerator",
        publicKeyPath: "/opt/processmining/etc/acf-ext-publicKey.der"
    }
    ```
  Change the host to the selected Process Apps node. For example:

  ```
  host: "https://<PROCESS_APPS_NODE_FQDN>/pm-accelerator",
  ```
- To disable Process Apps on all other nodes, move the following `.war` file and its `context.xml` out of the webapps folder:

  ```shell
  mv ${PM_HOME}/jetty-web/webapps/accelerator-core-1.14.3.war ${BACKUP_LOCATION}/
  mv ${PM_HOME}/jetty-web/webapps/context.xml ${BACKUP_LOCATION}/
  ```
When you launch IBM Process Mining, ensure that Process Apps is not launched on the disabled nodes. The following script must not be executed on disabled nodes: `${PM_HOME}/bin/pm-accelerators.sh`
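Before starting a node that has Process Apps disabled, it can help to verify that no accelerator `.war` remains deployed. A minimal sketch of such a guard (the function name is illustrative, not part of the product):

```shell
# Returns 0 when no Process Apps war remains in the node's webapps folder,
# 1 when an accelerator-core war is still deployed there.
check_process_apps_disabled() {
  local webapps="$1/jetty-web/webapps"
  if ls "${webapps}"/accelerator-core-*.war >/dev/null 2>&1; then
    echo "Process Apps war still deployed in ${webapps}"
    return 1
  fi
  return 0
}
```

You could call this with the node's `${PM_HOME}` before running the start scripts, and abort the start if it returns a nonzero status.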
Horizontal scaling
Scaling works at the tenant level. You can associate each tenant with a MonetDB instance.
To enable this configuration, add the new profile `engine_monetdb_partitions` to the profiles in the `processmining.conf` file:

```
profiles: [
    "engine_monetdb_partitions"
],
```
In addition, ensure the following on each node:

- Redis is installed outside the application server. For more information on Redis installation, see the Installing Redis topic.
- A shared network directory is mounted on the node by using the NFS protocol.
Make the following changes in the `processmining.conf` file:

- File system home path
- MongoDB connection
- MonetDB connection
- Redis connection

When you change the `processmining.conf` file, make sure that you do it correctly. For more information, see the Configuration file editing guidelines section. For more information about the databases, see the Database topic.
Example:

```
###########################################
# system config section
###########################################
filesystem.home: "/nfs/cluster1/",

###########################################
# database
###########################################
persistence: {
    ###########################################
    # MongoDB
    ###########################################
    mongodb: {
        database: "processmining",
        host: "172.32.17.74",
        port: 27017,
        user: "processmining",
        password: "",
        ssl: {
            enabled: false,
            trustStore: "",
            trustStorePassword: "",
            keyStore: "",
            keyStorePassword: ""
        }
    },
    ########################################
    # MonetDB
    ########################################
    partitioning: {
        datasources: { # datasources are lists of instances
            ds_1: {
                database: "mydb",
                host: "127.0.0.1",
                port: 50000,
                user: "monetdb",
                password: ""
            },
            ds_2: {
                database: "mydb",
                host: "127.0.0.1",
                port: 50001,
                user: "monetdb",
                password: ""
            }
        },
        tenant_lookup: { # tenant_lookup is the association between instances and tenants
            ds_1: ["-1"],
            ds_2: ["1"]
        }
    },
    # valid only when redis_cache profile is active
    redisCache: {
        database: 0,
        host: "172.32.17.74",
        port: 6379,
        password: "",
        ssl: {
            enabled: false,
            peerVerification: false,
            trustStore: "",
            trustStorePassword: "",
            keyStore: "",
            keyStorePassword: ""
        }
    }
}
```
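The `tenant_lookup` block maps each MonetDB datasource to the tenants it serves. A minimal Python sketch of how such a lookup resolves a tenant to a connection target (the function and data are illustrative, not product code):

```python
# Illustrative resolution of a tenant to its MonetDB datasource,
# mirroring the partitioning/tenant_lookup structure shown above.
datasources = {
    "ds_1": {"host": "127.0.0.1", "port": 50000},
    "ds_2": {"host": "127.0.0.1", "port": 50001},
}
tenant_lookup = {
    "ds_1": ["-1"],
    "ds_2": ["1"],
}

def datasource_for_tenant(tenant_id: str) -> dict:
    """Return the datasource configuration that serves the given tenant."""
    for ds_name, tenants in tenant_lookup.items():
        if tenant_id in tenants:
            return datasources[ds_name]
    raise KeyError(f"no datasource configured for tenant {tenant_id!r}")

print(datasource_for_tenant("1"))  # the instance listening on port 50001
```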