Installing the capabilities by running the deployment script
Depending on the capabilities that you want to install, the deployment script prepares the environment and then installs the selected automation containers.
Before you begin
The following information is needed before you run the script.
- A list of the capabilities that you want to install.
- A key for the IBM Entitled Registry.
- The route hostname. You can run the following command to get the name.
oc get route console -n openshift-console -o yaml | grep routerCanonicalHostname
- Storage class name for the dynamic storage provisioner.
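If you want the bare hostname rather than the full matching line, you can strip the field label with sed. This is a minimal sketch; the YAML here stands in for the real output of the oc get route command, and the hostname value is a made-up example.

```shell
# Illustrative sample of the route YAML; replace with the output of:
#   oc get route console -n openshift-console -o yaml
yaml='status:
  ingress:
  - routerCanonicalHostname: router-default.apps.example.com'

# Keep only the hostname value, dropping the field label.
hostname=$(printf '%s\n' "$yaml" | sed -n 's/.*routerCanonicalHostname: //p')
echo "$hostname"
```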
About this task
To install capabilities, a non-administrator user uses a deployment script. The script applies a custom resource (CR) file, which is deployed by the Cloud Pak operator. The deployment script prompts the user to enter values to get access to the container images and to select what is installed with the deployment.
You can use the kubectl command line if you want to run the steps manually.
Procedure
Results
The operator reconciliation loop can take some time. You must verify that the automation containers are running.
The small IBM Automation foundation deployment profile is used. For more information about the sizing for foundational services, see Deployment profiles.
What to do next
- Monitor the status of your pods from the command line by using the OpenShift CLI:
oc get pods -w
- When all of the pods are "Running", you can access the status of your services with the following OCP CLI command.
oc status
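The "all pods Running" check can also be scripted. The following sketch filters the STATUS column of `oc get pods --no-headers` output; the pod lines shown are placeholders piped in for illustration, not real cluster output.

```shell
# Succeed only when every pod line reports "Running" in the STATUS
# column (column 3 of `oc get pods --no-headers` output).
all_running() {
  awk '$3 != "Running" { bad=1 } END { exit bad }'
}

# Sample lines standing in for real `oc get pods --no-headers` output.
printf '%s\n' 'pod-a 1/1 Running 0 5m' 'pod-b 1/1 Running 0 4m' \
  | all_running && echo "all pods are Running"
```

In practice you would pipe the live command into the function, for example `oc get pods --no-headers | all_running`.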
When all the demo deployment pods are running, you can start to use your demo environment. In addition to the selected capabilities, Db2 and OpenLDAP are installed.
The Db2 database is used by the capabilities and is not accessible by an administrator or non-administrator user. The Db2U database is specific to the demo environment and is not supported in an enterprise deployment. LDAP is used for the user registry and includes some sample users.
You can run the post deployment script to access the services after the cp4a-deployment.sh script is run.
- Go to the cert-kubernetes directory on your local machine.
cd cert-kubernetes
- Log in to the cluster with the non-administrator user by using the OpenShift CLI.
oc login -u cp4a-user -p cp4a-pwd
- Run the post deployment script.
cd scripts
./cp4a-post-deployment.sh
The script outputs the routes that are created and the user credentials that you need to log in to the web applications to get started.
After you have the routes and admin user information, check to see whether you need to do the following tasks.
Log in to the Zen UI
Business Automation Studio leverages the IBM Cloud Pak Platform UI (Zen UI) to provide a role-based user interface for all Cloud Pak capabilities. Capabilities are dynamically available in the UI based on the role of the user that logs in. The URL for the Zen UI is listed in the output of the post deployment script.
The login page offers three authentication types: Enterprise LDAP, OpenShift authentication, and IBM provided credentials (admin only). Click Enterprise LDAP and enter the cp4admin user name and the password that is stored in the cp4ba-access-info ConfigMap.
The cp4admin user has access to Business Automation Studio features. You can get the details for the IBM provided admin user by getting the contents of the platform-auth-idp-credentials secret.
oc -n ibm-common-services get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' | base64 -d
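The same base64 decoding applies to any secret field you retrieve with a jsonpath query. A minimal, self-contained sketch; the encoded string below is a made-up sample, not a real credential.

```shell
# Secret data fields come back base64-encoded, so decode before use.
# 'cGFzc3dvcmQxMjM=' is a sample value, not a real password.
encoded='cGFzc3dvcmQxMjM='
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```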
You must use the IBM provided credentials (admin only) option to log in with the internal "admin" user.
If you want to add more users, you need to log in with the Zen UI administrator. The kubeadmin user in the OpenShift authentication and the IBM provided admin user have the Zen UI administrator role. When logged in, you can add users to the Automation Developer role to enable users and user groups to access Business Automation Studio and work with business applications and business automations. For more information, see Completing post-deployment tasks for Business Automation Studio.
Using the LDAP user registry
The LDAP server comes with a set of predefined users and groups to use with your demo environment. Changes to the user repository are not persisted after a pod restart. To log in and view the users, follow these steps.
- Get the <deployment-name> of the deployment. The <deployment-name> is the value of the metadata.name parameter in the CR that you installed. Using the OpenShift CLI:
oc get icp4acluster
- The LDAP admin user is "cn=admin,dc=example,dc=org". To get the password for the LDAP admin user, run the following OpenShift CLI command.
oc get secret <deployment-name>-openldap-secret -o jsonpath="{.data.LDAP_ADMIN_PASSWORD}" | base64 -d
- Access the LDAP admin client by using the <deployment-name>-phpldapadmin-route route to view the users and groups. You can also review the predefined users and groups with the following command.
oc get cm <deployment-name>-openldap-customldif -o yaml
To provide users for Task Manager, the script creates the following LDAP users and groups.
- User names: cp4admin, user1, user2, up to and including user10.
- Group names: TaskAdmins, TaskUsers, and TaskAuditors.
The cp4admin user is assigned to "TaskAdmins". The LDAP users user1 - user5 are assigned to "TaskUsers", and the users user6 - user10 are assigned to "TaskAuditors".
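To list the user and group names without opening the LDAP admin client, you can extract the cn values from the LDIF text in the customldif ConfigMap. The sample LDIF below is illustrative; the real ConfigMap contains the full set of users and groups described above.

```shell
# Sample LDIF standing in for the data in the
# <deployment-name>-openldap-customldif ConfigMap.
ldif='dn: cn=cp4admin,dc=example,dc=org
dn: cn=user1,dc=example,dc=org
dn: cn=TaskAdmins,dc=example,dc=org'

# Print only the cn value from each dn line.
printf '%s\n' "$ldif" | sed -n 's/^dn: cn=\([^,]*\),.*/\1/p'
```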
Enabling GraphQL integrated development environments for FileNet® Content Manager
The GraphiQL integrated development environment is not enabled by default because of a security risk. If you want to include this capability in your demo environment, you can add the parameter to enable the IDE.
- Find the generated YAML file in the directory where you ran the deployment script. For example, generated-cr/ibm_cp4a_cr_final.yaml.
- Add the following parameter to the file:
graphql:
  graphql_production_setting:
    enable_graph_iql: true
- Apply the updated custom resource YAML file.
In the next reconciliation loop, the operator picks up the change, and includes GraphiQL with your deployment.
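Before applying, it can help to sanity-check that the flag landed in the edited file. This sketch writes a stand-in CR fragment to a temporary path; in practice you would grep your real generated file (for example, generated-cr/ibm_cp4a_cr_final.yaml) and then apply it with oc apply.

```shell
# Stand-in CR fragment; substitute the path to your generated CR file.
cr=/tmp/ibm_cp4a_cr_final.yaml
cat > "$cr" <<'EOF'
graphql:
  graphql_production_setting:
    enable_graph_iql: true
EOF

# Confirm the GraphiQL flag is present before applying the CR.
if grep -q 'enable_graph_iql: true' "$cr"; then
  echo "GraphiQL flag present"
fi
# then: oc apply -f "$cr"
```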
Importing sample data for Business Automation Insights
If you selected IBM Business Automation Insights as an optional component, then you can test and explore the component by importing sample data. For more information, see https://github.com/icp4a/bai-data-samples.
Enabling Business Automation Insights for FileNet Content Manager
If you selected Business Automation Insights as an optional component and included the Content Event Emitter in your deployment, you must update the deployment to add the Kafka certificate to the trusted certificate list.
- Create a secret with your Kafka certificate, for example:
kubectl create secret generic eventstreamsecret --from-file=tls.crt=eventstream.crt
- Find the generated YAML file in the directory where you ran the deployment script. For example, generated-cr/ibm_cp4a_cr_final.yaml.
- Update the trusted_certificate_list parameter to include the secret that you created.
shared_configuration:
  trusted_certificate_list: ['eventstreamsecret']
If other certificates are in the list, use a comma to separate your new entry.
- Apply the updated custom resource YAML file.
Loading sample data for Automation Document Processing (21.0.1)
For 21.0.1: If you installed the Document Processing pattern, you must load the database with sample data before you use the Document Processing components.
Before you begin, go to the samples repository, download the import.tar.xz file from the ACA/DB2/imports folder to the host machine that you use to connect to OpenShift, and then extract the files.
tar -xvf import.tar.xz
- On the host machine, navigate to the folder that contains the imports folder that you created.
- If you are not logged in to OCP, log in and bind to the project that you used to install your deployment.
- Copy the imports folder to the Db2 container by running the following command.
oc cp imports db2u-release-db2u-0:/mnt/blumeta0/home/db2inst1/DB2
- Open a command shell.
oc rsh db2u-release-db2u-0
- Change to the DB2 directory and update permissions.
cd /mnt/blumeta0/home/db2inst1/DB2 && chown -R db2inst1:db2iadm1 imports
- Change to the db2inst1 user.
su db2inst1
- Run the script to load the first ontology set.
./LoadDefaultData.sh
When prompted, provide the database name as CP4ADB and the ontology name as ONT1.
- Run the script to load the second ontology set.
./LoadDefaultData.sh
When prompted, provide the database name as CP4ADB and the ontology name as ONT2.