Installing the capabilities in Operator Hub

If you want to select the capabilities to install and use only the default values, the Form View in the IBM operator catalog is the easiest way to do it.

Before you begin

  1. Log in to your OCP or ROKS cluster.
  2. In the Installed Operators view, verify that the status of the IBM Cloud Pak for Business Automation operator installation reads Succeeded, and verify the deployment by checking that all of the pods are running.

    Operator installation succeeded

  3. On Red Hat OpenShift Kubernetes Service (ROKS) only, apply the no root squash command for Db2.
    oc get no -l node-role.kubernetes.io/worker --no-headers -o name | xargs -I {} \
       -- oc debug {} \
       -- chroot /host sh -c 'grep "^Domain = slnfsv4.com" /etc/idmapd.conf || ( sed -i "s/.*Domain =.*/Domain = slnfsv4.com/g" /etc/idmapd.conf; nfsidmap -c; rpc.idmapd )'
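
    To confirm that the change was applied, you can run a similar command that only reads the Domain entry on each worker node. This is a minimal check, not part of the official procedure:
    oc get no -l node-role.kubernetes.io/worker --no-headers -o name | xargs -I {} \
       -- oc debug {} \
       -- chroot /host grep "^Domain" /etc/idmapd.conf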

Procedure

  1. Use the operator instance to apply a custom resource by clicking CP4BA deployment > Create Instance.
  2. In the Form View of the deployment editor, enter the values for everything that you want to include in your deployment.
    1. Enter a Name, or use the default icp4adeploy.
    2. Enter the appVersion 21.0.x.
    3. Accept the License by setting the value to true.
    4. Open the Shared Configuration section and enter values for the following parameters. For more information on the shared parameters, see Shared configuration parameters.
      Table 1. Shared configuration parameters
      Hostname suffix: A routing subdomain is generated by default. For 21.0.1, you must add a valid hostname, for example <namespace>.myhostname.
      Purchased CP4BA license: Set to non-production.
      Platform: Set to OCP or ROKS.
      Image repository (For 21.0.1): The default is cp.icr.io.
      Image pull secrets (For 21.0.1): The default is admin.registrykey.
      root_ca_secret: The default is icp4a-root-ca.
      external_tls_certificate_secret: Leave the value empty to sign all external routes with the root_ca_secret.
      Content initialization: Keep the default value.
      Content verification: Keep the default value.
      Trusted certificate list: Leave blank to generate a self-signed signer certificate.
      Purchased FNCM license: If you set IBM FileNet® Content Manager to true, set to non-production. Otherwise, leave the value empty.
      Purchased BAW license: If you set IBM Business Automation Workflow to true, set to non-production. Otherwise, leave the value empty.
      Storage configuration: Select a dynamic storage class from the list under Storage for Demo. For example, you might have csi-cephfs, managed-nfs-storage, rook-ceph-block, and rook-cephfs available on your cluster. If you set ROKS for the deployment platform, select the ibmc-file-gold-gid storage class.
      Admin user: Leave blank. Important: Filling in this field can make the demo deployment unusable.
    5. Set Deployment Type to demo.
    6. Select the capabilities that you want to include.
      Tip: If you do not want to include a capability, leave the value as false. For more information about the capabilities and their dependencies, see Capabilities for Demo deployments.
      • FileNet Content Manager
      • Business Automation Application
      • Operational Decision Manager
      • Automation Decision Services
      • Automation Document Processing
      • Automation Workflow/Workstream Services
    7. Open the Advanced Configuration section, choose whether to include Business Automation Insights and enter valid values for the parameters of the selected capabilities in the list.
      Restriction: Due to a limitation in the Form View, the repo_service_url parameter in Content Manager (FileNet) is still visible when Automation Document Processing (ADP) Runtime is set to false. You do not need to set a value for this configuration parameter if you do not want to include ADP.
      Tip: You can copy and paste parameters from the cert-kubernetes custom resource demo templates into the YAML View and edit them there. For more information about downloading cert-kubernetes, see Preparing for a Demo deployment. You can edit the CR file in the editor, but it is best to have the CR complete and verified before you save your changes. For example, go to http://www.yamllint.com/ to verify the contents of your file.

      For more information about the olm_ configuration parameters that enable you to switch between the Form View and the YAML View, see Business Automation configuration parameters for Operator Hub.
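
      For illustration, the following is a partial sketch of how the values in Table 1 typically appear in the YAML View. The sc_* parameter names are taken from the cert-kubernetes demo templates for 21.0.x and are not defined in this procedure, so verify them against the template for your version before you copy them:

      shared_configuration:
        sc_deployment_type: "demo"
        sc_deployment_platform: "OCP"
        sc_deployment_hostname_suffix: "<namespace>.myhostname"
        sc_deployment_license: "non-production"
        root_ca_secret: icp4a-root-ca
        trusted_certificate_list: []
        storage_configuration:
          sc_dynamic_storage_classname: "<your storage class>"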

    8. Identify GPU-enabled nodes for Automation Document Processing, if applicable.

      Db2 pods cannot be deployed on GPU-enabled nodes. If your cluster has GPU-enabled nodes, identify the nodes in the YAML view.

      Click YAML View, and add the following parameter to the file, in the shared_configuration section:

      node_labels:
            gpu_enabled: true
            gpu_nodelabel_key: "<string like nvidia.com/gpu.present>"

      The value for gpu_nodelabel_key is the unique label key that identifies the GPU nodes in your cluster, for example nvidia.com/gpu.present.
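
      If you are not sure which label key your GPU nodes use, you can list the node labels and look for a GPU-related key, for example:

      oc get nodes --show-labels | grep -i gpu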

  3. When you are ready, click Create.

Results

Check that the icp4ba cartridge in IBM Automation Foundation Core is ready. For more information about IBM Automation Foundation, see What is IBM Automation foundation?

Note: For 21.0.1-IF002 or later, a small IBM Automation foundation deployment is used. For more information about the sizing for foundational services, see Deployment profiles.

To view the status of the icp4ba cartridge in the OCP Admin console, click Operators > Installed Operators > IBM Automation Foundation Core. Click the Cartridge tab, click icp4ba, and then scroll to the Conditions section.

Conditions list
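
Alternatively, you can read the same conditions from the command line. This is a minimal sketch that assumes the Cartridge custom resource in your project is named icp4ba:

oc get cartridge icp4ba -o jsonpath='{.status.conditions}'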

How to access the capability services

When the deployment is successful, a ConfigMap is created in the namespace (project) to provide the cluster-specific details to access the services and applications. The ConfigMap name is prefixed with the deployment name (the default is icp4adeploy). You can find the ConfigMap by searching with a filter on "cp4ba-access-info".

The contents of the ConfigMap depend on the components that are included. Each component has one or more URLs and, if needed, a username and password.

<component1> URL: <RouteUrlToAccessComponent1> 
<component1> Credentials: <UserName>/<Password> (optional) 
<component2> URL: <RouteUrlToAccessComponent2> 
<component2> Credentials: <UserName>/<Password> (optional) 
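
For example, a minimal way to locate and read the ConfigMap from the command line, assuming you are in the deployment project (the exact name depends on your deployment name):

oc get configmap | grep cp4ba-access-info
oc get configmap <deployment-name>-cp4ba-access-info -o yaml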

If you installed 21.0.1 without an interim fix, you must go to the routes panel and open the routes with the following names for Operational Decision Manager. The username and password are odmAdmin/odmAdmin.

  • Decision Server Console: <meta_name>-odm-ds-console-route
  • Decision Runner: <meta_name>-odm-dr-route
  • Decision Center: <meta_name>-odm-dc-route
  • Decision Server Runtime: <meta_name>-odm-ds-runtime-route
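
For example, you can list the Operational Decision Manager routes in your project with a simple filter:

oc get route | grep odm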

What to do next

After you have the routes and admin user information, check to see whether you need to do the following tasks.

Tip: If you want or need to update values in a Demo deployment that you made in the Form View, you must edit the deployment in the YAML View. You can edit true or false values in the Form View, but all other parameters must be changed in the YAML View. You can access the custom resource from the YAML tab, or by clicking Actions > Edit ICP4ACluster.

YAML view

Note: If the capabilities that you installed include Business Automation Navigator (BAN) and the User Management Service (UMS), then you need to configure the Single Sign-On (SSO) logout for the Admin desktop. For more information, see Configuring SSO logout between BAN and UMS.

Log in to the Zen UI

Business Automation Studio leverages the IBM Cloud Pak Platform UI (Zen UI) to provide a role-based user interface for all Cloud Pak capabilities. Capabilities are dynamically available in the UI based on the role of the user that logs in. You can find the URL for the Zen UI by clicking Network > Routes and looking for the name cpd, or by running the following command.

oc get route | grep "^cpd"

The login page offers three authentication types: Enterprise LDAP, OpenShift authentication, and IBM provided credentials (admin only). Click Enterprise LDAP and enter the cp4admin user and the password from the cp4ba-access-info ConfigMap. The cp4admin user has access to Business Automation Studio features. You can get the details for the IBM provided admin user from the contents of the platform-auth-idp-credentials secret.

oc -n ibm-common-services get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' | base64 -d
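
The same secret typically also stores the admin username. A minimal sketch, assuming an admin_username key is present in the secret:

oc -n ibm-common-services get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_username}' | base64 -d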

You must use the IBM provided credentials (admin only) option to log in with the internal "admin" user.

If you want to add more users, you need to log in as a Zen UI administrator. The kubeadmin user in the OpenShift authentication option and the IBM provided admin user both have the Zen UI administrator role. When you are logged in, you can add users to the Automation Developer role to enable users and user groups to access Business Automation Studio and work with business applications and business automations. For more information, see Completing post-deployment tasks for Business Automation Studio.

Using the LDAP user registry

The LDAP server comes with a set of predefined users and groups to use with your demo environment. Changes to the user repository are not persisted after a pod restart. To log in and view the users, follow these steps.

  1. In the OCP console, select the project in which you deployed the Cloud Pak, and then click Workloads > Secrets > <deployment-name>-openldap-customldif > Data > Reveal Values.
    To provide users for Task Manager, the following LDAP users and groups are created by the deployment.
    1. User names: cp4admin, user1, user2, up to and including user10.
    2. Group names: TaskAdmins, TaskUsers, and TaskAuditors.

    The cp4admin user is assigned to "TaskAdmins". The LDAP users user1 - user5 are assigned to "TaskUsers", and the users user6 - user10 are assigned to "TaskAuditors".
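
    If you prefer the command line, you can dump the same ldif data directly from the secret. A minimal sketch, assuming the default deployment name icp4adeploy:

    oc extract secret/icp4adeploy-openldap-customldif --to=-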

Enabling GraphQL integrated development environments for FileNet Content Manager

The GraphiQL integrated development environment is not enabled by default because of a security risk. If you want to include this capability in your demo environment, you can add the parameter to enable the IDE.

  1. Click Actions > Edit ICP4ACluster, then click YAML to go into the YAML view.
  2. Add the following parameter to the file:
    graphql:
          graphql_production_setting:
            enable_graph_iql: true
  3. Apply the updated custom resource YAML file.

    In the next reconciliation loop, the operator picks up the change, and includes GraphiQL with your deployment.
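
    If you want to confirm that the operator picked up the change, you can follow its log. This sketch assumes the default operator deployment name ibm-cp4a-operator:

    oc logs -f deployment/ibm-cp4a-operator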

Importing sample data for Business Automation Insights

If you selected IBM Business Automation Insights as an optional component, then you can test and explore the component by importing sample data. For more information, see https://github.com/icp4a/bai-data-samples.

Enabling Business Automation Insights for FileNet Content Manager

If you selected Business Automation Insights as an optional component and included the Content Event Emitter in your deployment, you must update the deployment to add the Kafka certificate to the trusted certificate list.

  1. Create a secret with your Kafka certificate, for example:
    kubectl create secret generic eventstreamsecret --from-file=tls.crt=eventstream.crt
  2. Find the generated YAML file in the directory where you ran the deployment script. For example, generated-cr/ibm_cp4a_cr_final.yaml.
  3. Update the trusted_certificate_list parameter to include the secret that you created.
    shared_configuration:
          trusted_certificate_list: ['eventstreamsecret']

    If other certificates are in the list, use a comma to separate your new entry.

  4. Apply the updated custom resource YAML file.
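
    For example, assuming the file name from step 2:

    oc apply -f generated-cr/ibm_cp4a_cr_final.yaml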

Loading sample data for Automation Document Processing

For 21.0.1 If you installed the Document Processing pattern, you must load the database with sample data before you use the Document Processing components.

Before you begin, go to the samples repository, download the import.tar.xz file from the ACA/DB2/imports folder to the host machine that you use to connect to OpenShift, and then extract the files.

tar -xvf import.tar.xz
  1. On the host machine that you use for connecting to OpenShift, navigate to the folder that contains the imports folder that you created.
  2. If you are not logged in to OCP, log in and bind to the project that you used to install your deployment.
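    For example, replace the placeholders with your cluster URL, credentials, and project name:
    oc login https://<cluster-api-url>:6443 -u <username> -p <password>
    oc project <project-name>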
  3. Copy the imports folder to the DB2 container by running the following command.
    oc cp imports db2u-release-db2u-0:/mnt/blumeta0/home/db2inst1/DB2
  4. Open a command shell.
    oc rsh db2u-release-db2u-0
  5. Change to the DB2 directory and update permissions.
    cd /mnt/blumeta0/home/db2inst1/DB2 && chown -R db2inst1:db2iadm1 imports
  6. Change to the db2inst1 user.
    su db2inst1
  7. Run the script to load the first ontology set.
    ./LoadDefaultData.sh

    When prompted, provide the database name as CP4ADB and the Ontology name as ONT1.

  8. Run the script to load the second ontology set.
    ./LoadDefaultData.sh

    When prompted, provide the database name as CP4ADB and the Ontology name as ONT2.