Installing the capabilities by running the deployment script

Depending on the capabilities that you want to install, the deployment script prepares the environment and then installs the selected automation containers.

Before you begin

The following information is needed before you run the script.

  • A list of the capabilities that you want to install.
  • A key for the IBM Entitled Registry.
  • The route hostname. You can run the following command to get the hostname.
    oc get route console -n openshift-console -o yaml | grep routerCanonicalHostname
  • Storage class name for the dynamic storage provisioner.
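
For example, you can list the storage class names that are available on your cluster:

    oc get storageclass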

About this task

To install capabilities, a non-administrator user runs a deployment script. The script applies a custom resource (CR) file, which the Cloud Pak operator then uses to deploy the selected containers. The deployment script prompts the user to enter values that give access to the container images and to select what is installed with the deployment.

Note: The deployment script copies a custom resource (CR) template file for each capability. The template names include "demo" and are found in the descriptors/patterns folder. The CR files are configured by the deployment script. However, you can copy these templates, configure them by hand, and apply the file from the kubectl command line if you want to run the steps manually.
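
For example, a minimal sketch of the manual path. The template file name here is a placeholder; use the template in descriptors/patterns that matches your capability:

    cp descriptors/patterns/<demo_template>.yaml my_cr.yaml
    # Edit my_cr.yaml with your values, then apply it
    kubectl apply -f my_cr.yaml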

Procedure

  1. Log in to the cluster as the cluster administrator that you used in Setting up the cluster with the admin script, or as a non-administrator user who has access to the project.
    Using the OpenShift CLI:
    oc login -u cp4a-user -p cp4a-pwd
  2. If you did not already do so, download the cert-kubernetes GitHub repository to your local machine and go to the cert-kubernetes directory.
    For more information about downloading cert-kubernetes, see Preparing for a Demo deployment.
  3. View the list of projects on your cluster and change project to the target project before you run the deployment script.
    oc get projects
    oc project <project_name>

    The specified project is then used in all subsequent operations that manipulate project-scoped content.

  4. Run the deployment script from the local directory where you downloaded the cert-kubernetes repository, and follow the prompts in the command window.
    cd cert-kubernetes/scripts
    ./cp4a-deployment.sh
    The script prompts you to enter the relevant information for your evaluation deployment.
    1. Accept the license.
    2. Select a new installation type.
    3. Select the demo deployment type.
    4. Select OCP or ROKS.
    5. Choose the capabilities and optional components that you want to install.
      Tip: When prompted, press the Enter key to skip optional components. For more information about the capabilities and their dependencies, see Capabilities for Demo deployments.
    6. For 21.0.1: Enter the route hostname.
    7. Enter the storage class for your dynamic storage provisioner.

      If you selected ROKS as the platform, provide three storage class names for slow, medium, and fast storage. You must select cp4a-file-retain-gold-gid as the fast dynamic storage class name. This class provides fast storage for components such as Db2, which can prevent unnecessary operator retries.

Results

The operator reconciliation loop can take some time. You must verify that the automation containers are running.

Restriction: The deployment script generates a final CR file that is deployed. Any changes to the deployment must be done by running the script again. Customization of the generated CR file is not supported for a demo deployment, except when it is documented.
Note: For 21.0.1-IF002 or later, a small IBM Automation foundation deployment is used. For more information about the sizing for foundational services, see Deployment profiles.
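
While you wait, you can follow the reconciliation progress in the operator log. A minimal sketch, assuming that the operator deployment in your project is named ibm-cp4a-operator:

    oc logs -f deployment/ibm-cp4a-operator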

What to do next

  1. Monitor the status of your pods from the command line. Using the OpenShift CLI:
    oc get pods -w
  2. When all of the pods are "Running", you can check the status of your services with the following OCP CLI command.
    oc status
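
To spot pods that are not yet in the Running phase, you can also filter on the pod status, for example:

    oc get pods --field-selector=status.phase!=Running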

When all the demo deployment pods are running, you can start to use your demo environment. In addition to the selected capabilities, Db2 and OpenLDAP are installed.

The Db2 database is used by the capabilities and is not accessible to administrator or non-administrator users. The Db2U database is specific to the demo environment and is not supported in an enterprise deployment. LDAP is used for the user registry and includes some sample users.

After the cp4a-deployment.sh script is run, you can run the post deployment script to get the information that you need to access the services.

  1. Go to the cert-kubernetes directory on your local machine.
    cd cert-kubernetes
  2. Log in to the cluster with the non-administrator user. Using the OpenShift CLI:
    oc login -u cp4a-user -p cp4a-pwd
  3. Run the post deployment script.
    cd scripts
    ./cp4a-post-deployment.sh

    The script outputs the routes that are created and the user credentials that you need to log in to the web applications to get started.
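
If you need to retrieve the routes again later, you can also list them directly, for example:

    oc get routes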

After you have the routes and admin user information, check to see whether you need to do the following tasks.

Note: If the capabilities that you installed include Business Automation Navigator (BAN) and the User Management Service (UMS), then you need to configure the Single Sign-On (SSO) logout for the Admin desktop. For more information, see Configuring SSO logout between BAN and UMS.

Log in to the Zen UI

Business Automation Studio uses the IBM Cloud Pak Platform UI (Zen UI) to provide a role-based user interface for all Cloud Pak capabilities. Capabilities are dynamically available in the UI based on the role of the user who logs in. The URL for the Zen UI is listed in the output of the post deployment script.

The login page offers three authentication types: Enterprise LDAP, OpenShift authentication, and IBM provided credentials (admin only). Click Enterprise LDAP and enter the cp4admin user name and the password that is stored in the cp4ba-access-info ConfigMap. The cp4admin user has access to Business Automation Studio features. You can get the details for the IBM provided admin user from the contents of the platform-auth-idp-credentials secret.

oc -n ibm-common-services get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' | base64 -d

You must use the IBM provided credentials (admin only) option to log in with the internal "admin" user.
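
Similarly, you can view the cp4admin credentials without rerunning the post deployment script by reading the ConfigMap directly. A minimal sketch, assuming that you are in the deployment project:

    oc get cm cp4ba-access-info -o yaml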

If you want to add more users, you need to log in as a Zen UI administrator. The kubeadmin user in the OpenShift authentication and the IBM provided admin user both have the Zen UI administrator role. When you are logged in, you can add users to the Automation Developer role to enable users and user groups to access Business Automation Studio and work with business applications and business automations. For more information, see Completing post-deployment tasks for Business Automation Studio.

Using the LDAP user registry

The LDAP server comes with a set of predefined users and groups to use with your demo environment. Changes to the user repository are not persisted after a pod restart. To log in and view the users, follow these steps.

  1. Get the <deployment-name> of the deployment. The <deployment-name> is the value of the metadata.name parameter in the CR that you applied. Using the OpenShift CLI:
    oc get icp4acluster
  2. The LDAP admin user is "cn=admin,dc=example,dc=org". To get the LDAP admin password for the LDAP admin user, run the following OpenShift CLI command.
    oc get secret <deployment-name>-openldap-secret -o jsonpath="{.data.LDAP_ADMIN_PASSWORD}" | base64 -d
  3. Access the LDAP admin client by using the <deployment-name>-phpldapadmin-route route to view the users and groups (see the example after these steps). You can also review the predefined users and groups with the following command.
    oc get cm <deployment-name>-openldap-customldif -o yaml
    To provide users for Task Manager, the script creates the following LDAP users and groups.
    1. User names: cp4admin, user1, user2, up to and including user10.
    2. Group names: TaskAdmins, TaskUsers, and TaskAuditors.

    The cp4admin user is assigned to "TaskAdmins". The LDAP users user1 - user5 are assigned to "TaskUsers", and the users user6 - user10 are assigned to "TaskAuditors".
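
To get the URL of the LDAP admin client that is mentioned in step 3, you can read the route host, for example:

    oc get route <deployment-name>-phpldapadmin-route -o jsonpath='{.spec.host}'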

Enabling the GraphiQL integrated development environment for FileNet® Content Manager

The GraphiQL integrated development environment is not enabled by default because of a security risk. If you want to include this capability in your demo environment, you can add the parameter to enable the IDE.

  1. Find the generated YAML file in the directory where you ran the deployment script. For example, generated-cr/ibm_cp4a_cr_final.yaml.
  2. Add the following parameter to the file:
    graphql:
      graphql_production_setting:
        enable_graph_iql: true
  3. Apply the updated custom resource YAML file, as shown in the example after these steps.

    In the next reconciliation loop, the operator picks up the change, and includes GraphiQL with your deployment.
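
For example, a minimal sketch that applies the generated file from step 1:

    oc apply -f generated-cr/ibm_cp4a_cr_final.yaml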

Importing sample data for Business Automation Insights

If you selected IBM Business Automation Insights as an optional component, then you can test and explore the component by importing sample data. For more information, see https://github.com/icp4a/bai-data-samples.

Enabling Business Automation Insights for FileNet Content Manager

If you selected Business Automation Insights as an optional component and included the Content Event Emitter in your deployment, you must update the deployment to add the Kafka certificate to the trusted certificate list.

  1. Create a secret with your Kafka certificate, for example:
    kubectl create secret generic eventstreamsecret --from-file=tls.crt=eventstream.crt
  2. Find the generated YAML file in the directory where you ran the deployment script. For example, generated-cr/ibm_cp4a_cr_final.yaml.
  3. Update the trusted_certificate_list parameter to include the secret that you created.
    shared_configuration:
      trusted_certificate_list: ['eventstreamsecret']

    If other certificates are in the list, use a comma to separate your new entry, as shown in the example after these steps.

  4. Apply the updated custom resource YAML file.
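
For example, a sketch of the list with an existing entry plus the new secret. The existing secret name is illustrative:

    shared_configuration:
      trusted_certificate_list: ['existing-cert-secret', 'eventstreamsecret']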

For 21.0.1: Loading sample data for Automation Document Processing

For 21.0.1: If you installed the Document Processing pattern, you must load the database with sample data before you use the Document Processing components.

Before you begin, go to the samples repository, download the import.tar.xz file from the ACA/DB2/imports folder to the host machine that you use to connect to OpenShift, and then extract the files.

tar -xvf import.tar.xz
  1. On the host machine, navigate to the folder that contains the imports folder that you created.
  2. If you are not logged in to OCP, log in and switch to the project that you used to install your deployment.
  3. Copy the imports folder to the Db2 container by running the following command.
    oc cp imports db2u-release-db2u-0:/mnt/blumeta0/home/db2inst1/DB2
  4. Open a command shell.
    oc rsh db2u-release-db2u-0
  5. Change to the DB2 directory and update permissions.
    cd /mnt/blumeta0/home/db2inst1/DB2 && chown -R db2inst1:db2iadm1 imports
  6. Switch to the db2inst1 user.
    su db2inst1
  7. Run the script to load the first ontology set.
    ./LoadDefaultData.sh

    When prompted, provide the database name as CP4ADB and the ontology name as ONT1.

  8. Run the script to load the second ontology set.
    ./LoadDefaultData.sh

    When prompted, provide the database name as CP4ADB and the ontology name as ONT2.

Important: The demo deployment provides one project database for the Automation Document Processing capability. Therefore, you can create only one Document Processing project.