Installing a CP4BA Workflow Process Service Runtime production deployment

Workflow Process Service is a small-footprint business automation environment for testing and running workflow processes that coordinate manual tasks and services. You can install Workflow Process Service Runtime or Workflow Process Service Authoring on Red Hat® OpenShift® Container Platform. This topic describes how to prepare, deploy, and configure a Workflow Process Service Runtime production deployment.

If you run into issues while installing Workflow Process Service Runtime, see Troubleshooting Workflow Process Service.

If you are upgrading, see the upgrade documentation for the version that you are upgrading from.

Preparing for a Workflow Process Service Runtime deployment

Workflow Process Service Runtime requires an IBM Cloud Pak for Business Automation installation, and integrates with components in Cloud Pak for Business Automation.

  1. Make sure that you have the resources you need for your deployment. See Planning for Workflow Process Service.
  2. Plan and prepare your deployment on your cluster by completing the steps in Preparing for a production deployment.

Deploying required Workflow Process Service Runtime components

To install Workflow Process Service Runtime, you must use the Cloud Pak for Business Automation operator to configure Resource Registry and root Certificate Authority (CA).

If you already installed one of the Cloud Pak for Business Automation deployment patterns, you can proceed directly to step 2. For instructions to install a deployment pattern, see Creating a production deployment.

  1. If you didn't install a deployment pattern, you must customize the Cloud Pak for Business Automation custom resource (CR) to configure the required components.
    1. If your cluster does not support dynamic provisioning, create your persistent volume (PV) manually for your IBM Resource Registry component by completing the steps in Optional: Implementing storage.

      If dynamic storage provisioning is enabled on your cluster, create the following YAML file, and fill in the values of sc_slow_file_storage_classname, sc_medium_file_storage_classname, sc_fast_file_storage_classname, and sc_block_storage_classname.
      apiVersion: icp4a.ibm.com/v1
      kind: ICP4ACluster
      metadata:
         name: icp4adeploy
         labels:
           app.kubernetes.io/instance: ibm-dba
           app.kubernetes.io/managed-by: ibm-dba
           app.kubernetes.io/name: ibm-dba
           release: 24.0.0
      spec:
         appVersion: 24.0.0
         ibm_license: "accept"
         shared_configuration:
           ## Use this parameter to specify the license for the IBM Cloud Pak for Business Automation deployment for the rest of the Cloud Pak for Business Automation components.
           ## This value could differ from the rest of the licenses.
           sc_deployment_license: production
           sc_deployment_type: production
           ## On OCP 3.x and 4.x, the user script populates these three (3) parameters based on your input for a "production" deployment.
           ## If you are deploying manually without using the user script, you need to provide the storage classes for the slow, medium,
           ## and fast storage parameters below. If you have only one storage class defined, you can use that storage class for all three parameters.
           ## sc_block_storage_classname is for Zen; Zen requires/recommends block storage (RWO) for metastoreDB
           storage_configuration:
             sc_slow_file_storage_classname: "<Required>"
             sc_medium_file_storage_classname: "<Required>"
             sc_fast_file_storage_classname: "<Required>"
             sc_block_storage_classname: "<Required>"
           sc_deployment_platform: OCP
         ## This field is required to deploy Resource Registry (RR)
         resource_registry_configuration:
           replica_size: 1
    2. If you want to configure one or more LDAP configurations, use the ldap_configuration parameter in the icp4acluster CR. For more information about LDAP configuration, see LDAP configuration. You can also configure LDAP after deployment by using the Common UI console.
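      For illustration only, a minimal ldap_configuration sketch might look similar to the following; the host name, base DN, and bind secret name are hypothetical, and the authoritative parameter list is in LDAP configuration.
      spec:
        ldap_configuration:
          lc_selected_ldap_type: "IBM Security Directory Server"
          lc_ldap_server: "ldap.example.com"    # hypothetical LDAP host
          lc_ldap_port: "389"
          lc_ldap_base_dn: "dc=example,dc=com"  # hypothetical base DN
          lc_bind_secret: ldap-bind-secret      # hypothetical secret that holds the bind DN and password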
  2. Wait a few minutes, then run the command oc get icp4acluster -o yaml to make sure that the root certificate authority and Resource Registry are ready: .status.components.prereq.rootCAStatus must be Ready, .status.components.prereq.rootCASecretName must contain the correct secret name, and the endpoints list must include a "Resource Registry" entry under .status.endpoints. For example:
    status:
        components:
          ...
          prereq:
            conditions: []
            rootCASecretName: icp4adeploy-root-ca
            rootCAStatus: Ready
          resource-registry:
            rrAdminSecret: icp4adeploy-rr-admin-secret
            rrCluster: Ready
            rrService: Ready
          ...
        endpoints:
        - name: Resource Registry
          scope: Internal
          type: gRPC
          uri: icp4adeploy-dba-rr-client:2379
  3. Make sure that Zen and Resource Registry pods are listed in the oc get pod command result. For example:
    [root@xxxxxx]# oc get pod
    NAME                                                            READY   STATUS      RESTARTS   AGE 
    common-web-ui-5575b7dd-2jcgk                                    1/1     Running     0          9h 
    create-postgres-license-config-fhtxp                            0/1     Completed   0          9h 
    create-postgres-license-config-z69mr                            0/1     Completed   0          9h 
    create-secrets-job-5t56h                                        0/1     Completed   0          9h 
    iam-config-job-gllrj                                            0/1     Completed   0          9h 
    ibm-ads-operator-c9d7dbbb8-chvfq                                1/1     Running     0          10h 
    ibm-common-service-operator-85db7d47b8-rqplb                    1/1     Running     0          10h 
    ibm-commonui-operator-746fb6bc7c-tgnm2                          1/1     Running     0          9h 
    ibm-content-operator-86577cdfb9-lq2lw                           1/1     Running     0          10h 
    ibm-cp4a-operator-57d9c5bcdf-c96vm                              1/1     Running     0          10h 
    ibm-cp4a-wfps-operator-69c9666445-kwhgp                         1/1     Running     0          10h 
    ibm-dpe-operator-5894844bbf-4rkp5                               1/1     Running     0          10h 
    ibm-elastic-operator-controller-manager-668484b677-njcks        1/1     Running     0          9h 
    ibm-iam-operator-7d7c98dd8d-btvkq                               1/1     Running     0          9h 
    ibm-insights-engine-operator-6c48688b48-pxw7n                   1/1     Running     0          10h 
    ibm-mongodb-operator-79c79698f8-5ghzt                           1/1     Running     0          9h 
    ibm-nginx-8fbb99c8f-ffzcx                                       2/2     Running     0          9h 
    ibm-nginx-tester-d566fd65d-cnwlh                                2/2     Running     0          9h 
    ibm-odm-operator-7ffb8ffd76-vj6l9                               1/1     Running     0          10h 
    ibm-pfs-operator-779cc65dc8-8ct4q                               1/1     Running     0          10h 
    ibm-zen-operator-6f9f8b64b4-49zwv                               1/1     Running     0          9h 
    icp-mongodb-0                                                   2/2     Running     0          9h 
    icp4a-foundation-operator-6cd6f97b97-zrmcm                      1/1     Running     0          9h 
    icp4adeploy-dba-rr-f33844834a                                   1/1     Running     0          9h 
    icp4adeploy-rr-backup-28040140-9v8n4                            0/1     Completed   0          38s 
    icp4adeploy-rr-setup-pod                                        0/1     Completed   0          9h 
    meta-api-deploy-7784968547-f7cn5                                1/1     Running     0          9h 
    oidc-client-registration-s4fsm                                  0/1     Completed   0          9h 
    operand-deployment-lifecycle-manager-594cc759cf-ncg9n           1/1     Running     0          9h 
    platform-auth-service-688dbc5fd9-vbsc9                          1/1     Running     0          9h 
    platform-identity-management-746f7958bc-5mjbj                   1/1     Running     0          9h 
    platform-identity-provider-95486f9b7-lmgtj                      1/1     Running     0          9h 
    postgresql-operator-controller-manager-1-18-2-5c6f77597-ghb5k   1/1     Running     0          9h 
    pre-zen-operand-config-job-8s7mt                                0/1     Completed   0          9h 
    pre-zen-operand-config-job-wmtwn                                0/1     Completed   0          9h 
    setup-job-b5pfx                                                 0/1     Completed   0          9h 
    usermgmt-577467b94b-2dt9d                                       1/1     Running     0          9h 
    usermgmt-ensure-tables-job-zk5hp                                0/1     Completed   0          9h 
    zen-audit-7f75954694-rzrsx                                      1/1     Running     0          9h 
    zen-core-api-84dbcfbb96-54gxl                                   2/2     Running     0          9h 
    zen-core-bd6b5bbd5-bntmq                                        2/2     Running     0          9h 
    zen-core-create-tables-job-vcbkj                                0/1     Completed   0          9h 
    zen-metastore-backup-cron-job-28039680-gxsd5                    0/1     Completed   0          7h40m 
    zen-metastore-edb-1                                             1/1     Running     0          9h 
    zen-minio-0                                                     1/1     Running     0          9h 
    zen-minio-create-buckets-job-qr8j9                              0/1     Completed   0          9h 
    zen-pre-requisite-job-gr2h8                                     0/2     Completed   0          9h 
    zen-watcher-5b9b68d4fb-cbgl7                                    2/2     Running     0          9h 

Deploying Workflow Process Service Runtime

After configuring IBM Cloud Pak for Business Automation components, you can deploy Workflow Process Service Runtime.
  1. If you want to use an embedded PostgreSQL server, you can proceed directly to step 2. If you have an external PostgreSQL server, complete the following steps:
    1. Prepare your PostgreSQL database. To make sure the PostgreSQL database is configured correctly for your workload, update the following parameters in the postgresql.conf file of the database server:
      • shared_buffers (minimum 1024 MB): PostgreSQL performance tuning recommends using 25% of the memory for the shared buffer. Updates to the Linux kernel configuration might also be required. For more information, see the PostgreSQL tuning guides.
      • work_mem (minimum 20 MB): This parameter applies to each session, and many user sessions can cause large memory usage. This memory is critical because it is used for sort operations. The running time can increase significantly if the value is set too low; for example, toolkit deployments might take over an hour.
      • max_prepared_transactions (for example, 200): This value should be at least as large as the max_connections setting.
      • max_wal_size (for example, 6 GB): For larger workloads, the default value must be increased. To check whether an increase is required, see the PostgreSQL server log files.
      • log_min_duration_statement (for example, 5000): This optional parameter logs statements that exceed the specified running time, to identify bottlenecks and potential tuning areas. The value is measured in milliseconds; for example, a value of 5000 corresponds to 5 seconds.
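      For example, the corresponding entries in postgresql.conf might look like the following sketch, using the example values from the list above; tune them for your workload.
      # postgresql.conf (excerpt)
      shared_buffers = 1024MB               # about 25% of available memory
      work_mem = 20MB                       # per-session memory for sort operations
      max_prepared_transactions = 200       # at least as large as max_connections
      max_wal_size = 6GB                    # increase for larger workloads
      log_min_duration_statement = 5000     # log statements that run longer than 5 seconds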
    2. Update the values for database.external.databaseName, database.external.dbCredentialSecret, and database.external.dbServerCertSecret. For more information about other database parameters, see Workflow Process Service parameters.
    3. Create a database in your external PostgreSQL server and create a user secret, where username corresponds to the database username, and password corresponds to the database password. If you want to enable certificate-based authentication, you do not need a password for wfps-db-secret. For example, your file might look similar to:
      apiVersion: v1
      kind: Secret
      metadata:
        name: wfps-db-secret
      type: Opaque
      stringData:
        username: "wfpsadmin"
        password: "password"
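      For example, if you saved this secret definition as wfps-db-secret.yaml (a hypothetical file name), you might create it by running:
      oc apply -f wfps-db-secret.yaml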
    4. By default, SSL communication is enabled. If you want to disable SSL, change the value of database.external.enableSSL to false.
      If you want to enable SSL, create a CA certificate secret with the ca.crt key, by using the ca.crt file that is exported from your PostgreSQL server. Use the value of database.external.dbServerCertSecret as the secret name. For example, if you are enabling SSL by itself, the command might look similar to:
      kubectl create secret generic wfps-db-cacert-secret --from-file=ca.crt=./ca_crt.pem
      Your database configuration might look similar to:
      spec:
        database:
          external:
            type: postgresql
            enableSSL: true
            dbServerCertSecret: wfps-db-cacert-secret
      Optionally, if you want to enable both SSL and database certificate-based authentication, create the secret with client.crt and client.key. Set the value of spec.database.external.sslMode to verify-ca or verify-full. To create your secret, run a command similar to:
      kubectl create secret generic wfps-db-cacert-secret --from-file=ca.crt=./ca_crt.pem --from-file=tls.crt=./client.crt --from-file=tls.key=./client.key
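      With certificate-based authentication enabled, your database configuration might then look similar to the following sketch, assuming the combined wfps-db-cacert-secret that is created above:
      spec:
        database:
          external:
            type: postgresql
            enableSSL: true
            sslMode: verify-full
            dbServerCertSecret: wfps-db-cacert-secret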
    5. Optional: If you want to use custom Java™ Database Connectivity (JDBC) files inside the Workflow Process Service Runtime server, set the database.customJDBCPVC parameter. The persistent volume claim (PVC) must be in ROX (ReadOnlyMany) or RWX (ReadWriteMany) access mode; otherwise, high availability disaster recovery (HADR) is affected because all pods must be allocated to the same node. The PVC is mounted at the /shared/resources/jdbc/postgresql directory inside the container, and the jdbc/postgresql directory must be created inside customJDBCPVC. For example, the structure of the remote file system might look like:
      jdbc
      └── postgresql
          └── postgresql-42.2.15.jar
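      The corresponding custom resource entry might look similar to the following sketch, where my-custom-jdbc-pvc is a hypothetical PVC name:
      spec:
        database:
          customJDBCPVC: my-custom-jdbc-pvc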
  2. Prepare storage for the Workflow Process Service Runtime server. The Workflow Process Service Runtime server supports persistence for three types of files: data files, log files, and dump files. By default, data file persistence is enabled, while log and dump file persistence are disabled.
    • Option 1: If your environment supports dynamic provisioning, you can enable or disable each type of file persistence and configure each storage class. For example:
      spec:
        persistent:
          data:
            enable: true
            size: 1Gi
            storageClassName: data-storage-class
          dump:
            enable: false
            size: 1Gi
            storageClassName: dump-storage-class
          logs:
            enable: false
            size: 1Gi
            storageClassName: logs-storage-class
    • Option 2: If your environment does not support dynamic provisioning and you do not have available storage classes on your cluster, choose this option. Manually create your PV with spec.storageClassName configured so that it can be used as the storageClassName in the Workflow Process Service Runtime custom resource, as in the sketch that follows. If you want to scale up to two instances of the Workflow Process Service Runtime server, create another set of PVs and associate them with the corresponding storage classes.
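      For example, a manually created PV for the data files might look similar to the following sketch; the NFS server, export path, and PV name are hypothetical:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: wfps-data-pv                      # hypothetical PV name
      spec:
        capacity:
          storage: 1Gi
        accessModes:
        - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        storageClassName: data-storage-class    # must match storageClassName in the custom resource
        nfs:
          server: nfs.example.com               # hypothetical NFS server
          path: /data/wfps                      # hypothetical export path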
  3. Use the shared_configuration.sc_egress_configuration.sc_restricted_internet_access parameter in the Cloud Pak for Business Automation custom resource to control internet access. The default value is true, which prevents the Workflow Process Service pod from accessing any external systems other than known targets such as databases. Evaluate and change your setting, if necessary. See Configuring cluster security.
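    For example, to relax the restriction, the setting in the Cloud Pak for Business Automation custom resource might look similar to this sketch:
     spec:
       shared_configuration:
         sc_egress_configuration:
           sc_restricted_internet_access: false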
  4. Create a custom resource YAML file for your Workflow Process Service Runtime configuration. For more information about parameters, see Workflow Process Service parameters. After you complete the following steps, your custom resource might look similar to the following:
    apiVersion: icp4a.ibm.com/v1
    kind: WfPSRuntime
    metadata:
      name: wfps-instance1
    spec:
      appVersion: "24.0.0"
      deploymentLicense: production    
      license:
        accept: true
    1. Optional: If you are using LDAP, it is recommended that you update the value of spec.admin.username with an LDAP user.
      By default, the operator sets spec.admin.username to be the Common Services admin user from the platform-auth-idp-credential secret in the Common Services namespace.
      • If you are using the shared Common Services, the namespace is ibm-common-services.
      • If you are using a dedicated Common Services, you can find the namespace from the common-service-maps ConfigMap from the kube-public namespace. For more information about the common-service-maps ConfigMap, see step 2 in Setting up the cluster in the OpenShift console.

      You can configure LDAP in Identity Access Management and then set the LDAP user to be spec.admin.username. The Workflow Process Service Runtime operator automatically configures the LDAP user as a Zen user. To configure LDAP, see step 1 of Completing post-deployment tasks for Workflow Process Service Runtime.
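      For example, setting an LDAP user as the administrator might look similar to the following sketch; the user name is hypothetical:
      spec:
        admin:
          username: wfpsadmin@example.com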

    2. If you want to let the Workflow Process Service Runtime operator provision an embedded PostgreSQL instance, you must make sure that your OpenShift Container Platform cluster already has a default storage class defined. If there is no default storage class that is defined, set the storage class name by using the spec.persistent.storageClassName parameter. For example:
      spec:
         persistent:
           storageClassName: <storage_class_name>
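      To check whether your cluster already has a default storage class, you can run the following command; the default class is marked with (default) in the output:
      oc get storageclass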
    3. Optional: If you want to add custom files inside the Workflow Process Service Runtime server, update node.customFilePVC. The persistent volume claim (PVC) must be in ROX (ReadOnlyMany) or RWX (ReadWriteMany) access mode; otherwise, HADR is affected because all pods must be allocated to the same node. The PVC is mounted at the /opt/ibm/bawfile directory inside the container. For example, the customFilePVC might look similar to:
      spec:
        node:
          customFilePVC: my-custom-wfps-pvc
    4. Optional: If you want to enable the full text search feature, include the following lines:
      spec: 
        capabilities: 
          fullTextSearch: 
            enable: true 
            adminGroups: 
            - example_group 
            esStorage:
              storageClassName: BlockStorageClassName
              size: 50Gi
            esSnapshotStorage:
              storageClassName: BlockStorageClassName
              size: 10Gi
      If you installed Cloud Pak foundational services Elasticsearch, you don't need to add capabilities.fullTextSearch.esStorage and capabilities.fullTextSearch.esSnapshotStorage. If you didn't install Cloud Pak foundational services Elasticsearch and capabilities.fullTextSearch.enable is set to true, you must add capabilities.fullTextSearch.esStorage and capabilities.fullTextSearch.esSnapshotStorage in the custom resource YAML file. The storage classes for Elasticsearch and the Elasticsearch snapshot must provision PVs in block mode rather than file system mode.
    5. Optional: If you want to start external services in your workflows, you need to retrieve the certificate of the external service and add it into the server trust list as a secret. For the serverTrustCertificateList parameter, enter a list of secrets, where every secret stores a trusted certificate. To create a secret, run the following command:
      kubectl create secret generic <example_ocp_external_default_certificate> --from-file=tls.crt=./cert.crt
      Your serverTrustCertificateList configuration might look similar to:
      spec:
        tls:
          serverTrustCertificateList:
          - <example_ocp_external_default_certificate>
    6. Apply the custom resource by running the following command:
      oc apply -f <custom_resource_name>.yaml
    7. After a few minutes, verify that you see your pods, services, and route. If you chose embedded PostgreSQL, the PostgreSQL server pod and service are also listed. For example:
      [root@xxxxxxx]# oc get pod
      NAME                                   READY   STATUS    RESTARTS   AGE
      wfps-instance1-postgre-0               1/1     Running   0          21h
      wfps-instance1-wfps-runtime-server-0   1/1     Running   0          21h
      [root@xxxxxx]# oc get service
      NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
      wfps-instance1-postgre-any             ClusterIP   172.30.60.216    <none>        5432/TCP   6d
      wfps-instance1-postgre-r               ClusterIP   172.30.43.94     <none>        5432/TCP   6d
      wfps-instance1-postgre-ro              ClusterIP   172.30.234.237   <none>        5432/TCP   6d
      wfps-instance1-postgre-rw              ClusterIP   172.30.105.46    <none>        5432/TCP   6d
      wfps-instance1-wfps-headless-service   ClusterIP   None             <none>        9443/TCP   6d
      wfps-instance1-wfps-service
      [root@xxxxxx]# oc get route
      NAME   HOST/PORT                                          PATH   SERVICES        PORT                   TERMINATION            WILDCARD
      cpd    cpd-cp4a-project.apps.xxxxxx.cp.fyre.ibm.com          ibm-nginx-svc   ibm-nginx-https-port   passthrough/Redirect   None

Completing post-deployment tasks for Workflow Process Service Runtime

You can choose to configure LDAP or a third-party identity and access management provider.
  1. Go to the cluster in your Common UI console. To access the Common UI console, see Accessing your cluster by using the console. To configure an LDAP connection, see Configuring LDAP connection.
  2. Add LDAP users in Cloud Pak Platform UI.
    1. Connect to the URL: https://cluster_address, where cluster_address is the IBM Cloud Pak console route. You can get the IBM Cloud Pak console route by running the command:
      oc get route cpd -o jsonpath='{.spec.host}' && echo
      The output might look similar to:
      cpd-namespace_name.apps.mycluster.mydomain
      Using the example output, the console URL would look similar to:
      https://cpd-namespace_name.apps.mycluster.mydomain/zen
    2. Log in to the IBM Cloud Pak dashboard and select OpenShift Container Platform authentication for kubeadmin, or log in with the IBM-provided credentials from step 1a, if you are an administrator.
    3. Go to Manage users > Add users.
    4. Type the names of users that you want to add, and click Next.
    5. Assign the users to roles, or add them to a group. You can add your LDAP user under Users or you can add your LDAP user group under User groups. For both users and user groups, make sure that at least one role is selected. For example, roles include administrator, automation administrator, automation analyst, automation developer, automation operator, and user.
    6. Click Add to register the users.

Verifying your Workflow Process Service Runtime deployment

To access services provided by Workflow Process Service Runtime, you might need to log in with your LDAP user and password.
  1. Make sure your Workflow Process Service Runtime deployment is ready by running the command:
    oc get wfps <CR_name> -o=jsonpath='{.status.components.wfps.configurations[*].value}'
    The output might look similar to:
    <CR_name>-admin-client-secret Ready Ready Ready Ready
  2. To access the Workplace console, you have two options. You can run the command:
    oc get wfps <CR_name> -o=jsonpath='{.status.endpoints[2].uri}'
    Alternatively, you can construct the Workplace console URL manually:
     https://$(oc get route cpd -o jsonpath="{.spec.host}")/<CR_name>-wfps/Workplace
    For example, the resulting Workplace console URL might look like:
    https://cpd-cp4a-project.apps.xxxxxx.cp.fyre.ibm.com/<CR_name>-wfps/Workplace
  3. To access the Operations REST APIs Swagger UI, you have two options. You can run the command:
    oc get wfps <CR_name> -o=jsonpath='{.status.endpoints[3].uri}'
    Alternatively, you can construct the Operations REST APIs Swagger UI URL manually:
     https://$(oc get route cpd -o jsonpath="{.spec.host}")/<CR_name>-wfps/ops/explorer
    For example, the resulting Operations REST APIs Swagger UI URL might look like:
    https://cpd-cp4a-project.apps.xxxxxx.cp.fyre.ibm.com/<CR_name>-wfps/ops/explorer
  4. To construct the URLs of exposed REST services and exposed web services, you must locate the endpoint of Workflow Process Service Runtime in the custom resource file's status field. To determine the URL of your REST services and web services, complete the following steps:
    1. Run the command:
      oc get wfps wfps-instance1 -o yaml
    2. In the endpoints section, locate the URI of the external Workflow Process Service Runtime instance. For example:
          - name: External Base URL
            scope: External
            type: https
            uri: https://cpd-wfps3.apps.fjk-ocp474.cp.example.com/wfps-instance1-wfps
    3. The URLs of your REST services have the following structure:
      https://host_name:port/[<custom_prefix>/]automationservices/rest/<process_app_name>/[<snapshot_name>/]<rest_service_name>/docs
      Where:
      • https://host_name:port/[<custom_prefix>/] is your URI value from the previous step.
      • <process_app_name> is the name of the process application.
      • <snapshot_name> is the optional name of the snapshot.
      • <rest_service_name> is the name of the REST service.
    4. The URLs of your web services have the following structure:
      https://host_name:port/[<custom_prefix>/]teamworks/webservices/<process_app_name>/[<snapshot_name>/]<web_service_name>.tws
      Where:
      • https://host_name:port/[<custom_prefix>/] is your URI value from step 4b.
      • <process_app_name> is the name of the process application.
      • <snapshot_name> is the optional name of the snapshot.
      • <web_service_name> is the name of the web service.
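      For example, with the External Base URL from step 4b and the hypothetical names MyApp, MySnapshot, and MyRestService, a REST service URL might look like:
      https://cpd-wfps3.apps.fjk-ocp474.cp.example.com/wfps-instance1-wfps/automationservices/rest/MyApp/MySnapshot/MyRestService/docs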

Managing your embedded PostgreSQL server

  1. To access data in your PostgreSQL server:
    1. Run the command oc get cluster to get the PostgreSQL cluster name. For example, the cluster name might be similar to: wfps-instance1-postgre.
    2. Run the command kubectl port-forward --address 0.0.0.0 svc/wfps-instance1-postgre-rw 5432:5432 on the OpenShift Container Platform infrastructure node. The infrastructure node IP and port (5432) then serve as the externally accessible database server and port.
    3. Get the username and password from the wfps-instance1-postgre-app secret to access the default database wfpsdb. To expose more PostgreSQL services, see Exposing Postgres Services.
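      For example, a sketch of retrieving the credentials and connecting with the psql client, assuming the default wfpsdb database:
      oc get secret wfps-instance1-postgre-app -o jsonpath='{.data.username}' | base64 -d && echo
      oc get secret wfps-instance1-postgre-app -o jsonpath='{.data.password}' | base64 -d && echo
      psql -h <infrastructure_node_IP> -p 5432 -U <username> -d wfpsdb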
  2. Check your license.
    1. Run the command oc get cluster to get the PostgreSQL cluster name. For example, the cluster name might be similar to wfps-instance1-postgre.
    2. Run the command oc get cluster <cluster_name> -o yaml to check the license status. The output might look like:
       licenseStatus:
            isTrial: true
            licenseExpiration: "2024-10-01T00:00:00Z"
            licenseStatus: Valid license (IBM - Data & Analytics (Cloud))
            repositoryAccess: false
            valid: true
  3. To configure backup and recovery for PostgreSQL, see Backup and Recovery.
  4. You can configure the operator's management of the EDB PostgreSQL cluster.
    1. If you want to manage the embedded PostgreSQL cluster yourself, update the value of spec.database.managed.managementState to Unmanaged in the Workflow Process Service Runtime custom resource YAML file, as in the sketch after these steps. After you update the value, the Workflow Process Service Runtime operator no longer manages the embedded PostgreSQL cluster. To change the parameters and resources of the PostgreSQL cluster, see PostgreSQL Configuration and Resource management. To add nodeSelector and select the nodes that a pod can run on, see Node selection through nodeSelector.
      In the Unmanaged state, you must manually delete the PostgreSQL cluster after you delete the Workflow Process Service Runtime instance. For example, to delete your cluster, your command might look similar to:
      oc delete cluster wfps-instance1-postgre
      where wfps-instance1-postgre is the name of your PostgreSQL cluster.
    2. If you want the Workflow Process Service Runtime operator to manage the EDB PostgreSQL cluster, set spec.database.managed.managementState to Managed. The PostgreSQL cluster will have the default configuration and the PostgreSQL cluster will be deleted automatically after the Workflow Process Service Runtime instance is deleted.
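      For example, a sketch of the setting to take over management of the embedded PostgreSQL cluster yourself; set it back to Managed to return control to the operator:
      spec:
        database:
          managed:
            managementState: Unmanaged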