Installing the capabilities in Operator Hub
If you want to select the capabilities to install and use only the default values, the easiest way is to use the Form View in the IBM operator catalog.
Before you begin
- Log in to your OCP or ROKS cluster.
- In the Installed Operators view, verify that the status of the IBM Cloud Pak for Business Automation operator installation is Succeeded, and verify the deployment by checking that all of the pods are running.
- On Red Hat OpenShift Kubernetes Service (ROKS) only, apply the no root squash command for Db2.
oc get no -l node-role.kubernetes.io/worker --no-headers -o name | xargs -I {} -- oc debug {} -- chroot /host sh -c 'grep "^Domain = slnfsv4.com" /etc/idmapd.conf || ( sed -i "s/.*Domain =.*/Domain = slnfsv4.com/g" /etc/idmapd.conf; nfsidmap -c; rpc.idmapd )'
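To spot-check that the change took effect on a node, you can inspect /etc/idmapd.conf in a debug session. A minimal sketch; the node name is a placeholder, so substitute a real worker node name from oc get nodes:

```shell
# Verify the Domain setting on one worker node (node name is an assumption)
oc debug node/<worker-node> -- chroot /host grep "^Domain" /etc/idmapd.conf
```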
Procedure
Results
Check to make sure that the icp4ba cartridge in the IBM Automation Foundation Core is ready. For more information about IBM Automation Foundation, see What is IBM Automation foundation?
A small IBM Automation foundation deployment is used. For more information about the sizing for foundational services, see Deployment profiles.
To view the status of the icp4ba cartridge in the OCP Admin console, click . Click the Cartridge tab, click icp4ba, and then scroll to the Conditions section.
How to access the capability services
When the deployment is successful, a ConfigMap is created in the namespace (project) to provide the cluster-specific details to access the services and applications. The ConfigMap name is prefixed with the deployment name (the default is icp4adeploy). You can search for the routes with a filter on "cp4ba-access-info".
The contents of the ConfigMap depend on the components that are included. Each component has one or more URLs and, if needed, a username and password.
<component1> URL: <RouteUrlToAccessComponent1>
<component1> Credentials: <UserName>/<Password> (optional)
<component2> URL: <RouteUrlToAccessComponent2>
<component2> Credentials: <UserName>/<Password> (optional)
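For example, the access details can also be read from the command line. A sketch, assuming the default deployment name icp4adeploy described above; replace <namespace> with your project:

```shell
# Find the access-info ConfigMap (its name is prefixed with the deployment name)
oc get configmap -n <namespace> | grep cp4ba-access-info

# Print its contents, including the component URLs and credentials
oc get configmap icp4adeploy-cp4ba-access-info -n <namespace> -o yaml
```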
If you installed 21.0.1 without an interim fix, you must go to the routes panel and open the routes with the corresponding names for Operational Decision Manager. The username and password are odmAdmin/odmAdmin.
- Decision Server Console: <meta_name>-odm-ds-console-route
- Decision Runner: <meta_name>-odm-dr-route
- Decision Center: <meta_name>-odm-dc-route
- Decision Server Runtime: <meta_name>-odm-ds-runtime-route
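As a convenience, the four ODM routes can be listed in one command. A sketch, assuming the routes live in the Cloud Pak project; replace <namespace> with your own:

```shell
# List the Operational Decision Manager routes and their host names
oc get route -n <namespace> | grep odm
```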
What to do next
After you have the routes and admin user information, check to see whether you need to do the following tasks.
You can set true or false values in the Form View, but the other parameters need to be set in the YAML View. You can access the custom resource from the YAML tab, or by clicking .
Log in to the Zen UI
Business Automation Studio leverages the IBM Cloud Pak Platform UI (Zen UI) to provide a role-based user interface for all Cloud Pak capabilities. Capabilities are dynamically available in the UI based on the role of the user that logs in. You can find the URL for the Zen UI by clicking cpd, or by running the following command and looking for the name cpd.
oc get route | grep "^cpd"
The login page offers three authentication types: Enterprise LDAP, OpenShift authentication, and IBM provided credentials (admin only). Click Enterprise LDAP and enter the cp4admin user and the password from the cp4ba-access-info ConfigMap.
The cp4admin user has access to Business Automation Studio features. You can get the details for the IBM provided admin user by getting the contents of the platform-auth-idp-credentials secret.
oc -n ibm-common-services get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' | base64 -d
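The same secret also holds the username. A sketch, assuming the standard admin_username key that IBM foundational services store in platform-auth-idp-credentials:

```shell
# Read the IBM provided admin username (the key name is an assumption)
oc -n ibm-common-services get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_username}' | base64 -d
```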
You must use the IBM provided credentials (admin only) option to log in with the internal "admin" user.
If you want to add more users, you need to log in with the Zen UI administrator. The kubeadmin user in the OpenShift authentication and the IBM provided admin user have the Zen UI administrator role. When logged in, you can add users to the Automation Developer role to enable users and user groups to access Business Automation Studio and work with business applications and business automations. For more information, see Completing post-deployment tasks for Business Automation Studio.
Using the LDAP user registry
The LDAP server comes with a set of predefined users and groups to use with your demo environment. Changes to the user repository are not persisted after a pod restart. To log in and view the users, follow these steps.
- In the OCP console, select the project in which you deployed the Cloud Pak, and then click .
To provide a user for Task Manager, the following LDAP users and groups are created by the deployment.
- User names: cp4admin, user1, user2, up to and including user10.
- Group names: TaskAdmins, TaskUsers, and TaskAuditors.
The cp4admin user is assigned to "TaskAdmins". The LDAP users user1 - user5 are assigned to "TaskUsers", and the users user6 - user10 are assigned to "TaskAuditors".
Enabling GraphQL integrated development environments for FileNet Content Manager
The GraphiQL integrated development environment is not enabled by default because of a security risk. If you want to include this capability in your demo environment, you can add the parameter to enable the IDE.
- Click , and then click YAML to go into the YAML view.
- Add the following parameter to the file:
graphql:
  graphql_production_setting:
    enable_graph_iql: true
- Apply the updated custom resource YAML file.
In the next reconciliation loop, the operator picks up the change, and includes GraphiQL with your deployment.
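If you prefer the command line to the console's YAML view, the same change can be merged into the custom resource with oc patch. A sketch; the CR kind icp4acluster and the name icp4adeploy are assumptions based on the default deployment name, so adjust them to your environment:

```shell
# Merge the GraphiQL setting into the custom resource spec;
# the operator picks it up in the next reconciliation loop
oc patch icp4acluster icp4adeploy --type merge \
  -p '{"spec":{"graphql":{"graphql_production_setting":{"enable_graph_iql":true}}}}'
```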
Importing sample data for Business Automation Insights
If you selected IBM Business Automation Insights as an optional component, then you can test and explore the component by importing sample data. For more information, see https://github.com/icp4a/bai-data-samples.
Enabling Business Automation Insights for FileNet Content Manager
If you selected Business Automation Insights as an optional component and included the Content Event Emitter in your deployment, you must update the deployment to add the Kafka certificate to the trusted certificate list.
- Create a secret with your Kafka certificate, for example:
kubectl create secret generic eventstreamsecret --from-file=tls.crt=eventstream.crt
- Find the generated YAML file in the directory where you ran the deployment script. For example, generated-cr/ibm_cp4a_cr_final.yaml.
- Update the trusted_certificate_list parameter to include the secret that you created.
shared_configuration:
  trusted_certificate_list: ['eventstreamsecret']
If other certificates are in the list, use a comma to separate your new entry.
- Apply the updated custom resource YAML file.
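The final step, applying the updated custom resource, can be done with oc apply, using the generated file path mentioned in the steps above:

```shell
# Re-apply the updated custom resource; the operator reconciles the change
oc apply -f generated-cr/ibm_cp4a_cr_final.yaml
```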
Loading sample data for Automation Document Processing
For 21.0.1: If you installed the Document Processing pattern, you must load the database with sample data before you use the Document Processing components.
Before you begin, go to the samples repository, download the import.tar.xz file from the ACA/DB2/imports folder to the host machine that you use to connect to OpenShift, and then extract the files.
tar -xvf import.tar.xz
- On the host machine that you use for connecting to OpenShift, navigate to the folder that contains the imports folder that you created.
- If you are not logged in to OCP, log in and bind to the project that you used to install your deployment.
- Copy the imports folder to the Db2 container by running the following command.
oc cp imports db2u-release-db2u-0:/mnt/blumeta0/home/db2inst1/DB2
- Open a command shell.
oc rsh db2u-release-db2u-0
- Change to the DB2 directory and update permissions.
cd /mnt/blumeta0/home/db2inst1/DB2 && chown -R db2inst1:db2iadm1 imports
- Change to the db2inst1 user.
su db2inst1
- Run the script to load the first ontology set.
./LoadDefaultData.sh
When prompted, provide the database name as CP4ADB and the Ontology name as ONT1.
- Run the script to load the second ontology set.
./LoadDefaultData.sh
When prompted, provide the database name as CP4ADB and the Ontology name as ONT2.