Installing the capabilities by running the deployment script

Depending on the capabilities that you want to install, the deployment script prepares the environment and then installs the selected automation containers.

Before you begin

The following information is needed before you run the script.

  • A list of the capabilities that you want to install.
  • A key for the IBM Entitled Registry.
  • The route hostname. You can run the following command to get the hostname.
    oc get route console -n openshift-console -o yaml | grep routerCanonicalHostname
  • The storage class name for provisioned block storage.
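  • Optional: A list of the storage classes that are available on your cluster. You can list them before you run the script; the class names in your cluster might differ from the examples in this topic.
    oc get storageclass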

About this task

A non-administrator user can run the deployment script to install the capabilities. The script applies a custom resource (CR) file, which the Cloud Pak operator uses to deploy the selected capabilities. The deployment script prompts the user to enter values to get access to the container images and to select what is installed with the deployment.

Procedure

  1. Log in to the cluster with the cluster administrator account that you used in Setting up the cluster with the admin script, or with a non-administrator user who has access to the project.
    Using the OpenShift CLI:
    oc login -u cp4a-user -p cp4a-pwd
  2. If you did not already do so, download the cert-kubernetes GitHub repository to a supported VM or machine, and go to the cert-kubernetes directory.
    For more information about downloading cert-kubernetes, see Preparing for a starter deployment.
  3. View the list of projects in your cluster and switch to the target project before you run the deployment script.
    oc get projects
    oc project ${NAMESPACE}
    Note: If you used the All namespaces option to install the Cloud Pak operator, change the scope to the project that you created for your deployment (cp4ba-starter).

    The specified project is used in all subsequent operations that manipulate project-scoped content.

  4. Run the deployment script from the local directory where you downloaded the cert-kubernetes repository, and follow the prompts in the command window.
    cd cert-kubernetes/scripts
    ./cp4a-deployment.sh -n ${NAMESPACE}
    The script prompts you to enter the relevant information for your evaluation deployment.
    1. Accept the license. You must agree to the license that is displayed.
    2. If you already deployed a CP4BA FileNet Content Manager instance in your cluster, then select Yes. The default is No.
    3. Select a new installation type.
    4. Select the starter deployment type.

      The size of the Cloud Pak foundational services instance is set to starterset.

    5. Select OpenShift Container Platform (OCP) - Private Cloud or RedHat OpenShift Kubernetes Service (ROKS) - Public Cloud.
    6. If you selected OpenShift Container Platform (OCP) - Private Cloud, then select Yes if your OCP cluster is deployed on AWS or Azure; otherwise, select No.
    7. Choose the capabilities and optional components that you want to install.
      1) FileNet Content Manager
      2) Operational Decision Manager
      3) Automation Decision Services
      4) Business Automation Application
      5) Business Automation Workflow and Automation Workstream Services
      6) Automation Document Processing

      Automation Document Processing (6) does not support a cluster with a Linux on Z (s390x) architecture.

      For more information about dependencies, see Capabilities for starter deployments.

      Tip: After you make a first selection, you can make more selections to combine multiple capabilities. Press [ENTER] to skip optional components, and press [ENTER] again when you are done.
    8. If IBM Content Collector for SAP is selected as an option, provide the URL to the ZIP file that contains the ICCSAP drivers.
    9. Enter the storage class name for your provisioned dynamic storage (RWX).

      If you selected ROKS as the platform, provide three storage class names: slow, medium, and fast. You must select storage class names that end with "gid", for example, ibmc-file-bronze-gid, ibmc-file-silver-gid, and ibmc-file-gold-gid. Select the gold file storage class (xxxx-file-gold-gid) as the fast dynamic storage class name. This class provides fast storage for components such as Db2®, which can prevent unnecessary operator retries.

    10. A summary of your selections is displayed. Enter Yes to confirm that the information is correct.

Results

The operator reconciliation loop can take some time. You must verify that the automation containers are running.

Restriction: The deployment script generates a final CR file that is deployed. Any changes to the deployment must be done by running the script again. Customization of the generated CR file is not supported for a starter deployment, except when it is documented.
Note: A small Cloud Pak foundational services deployment is used. For more information about the sizing for foundational services, see Deployment profiles.

What to do next

To verify that your deployment is up and running:

  1. Monitor the status of your pods from the command line. Using the OpenShift CLI:
    oc get pods -w
  2. When all the pods show the Ready, PrereqReady, or Running conditions, you can start to use your starter environment. You can check the status of your services with the following OCP CLI command.
    oc status
Note: In addition to the selected capabilities, Postgres and OpenLDAP are installed. The Postgres database is specific to the starter environment and is not supported in a production deployment. LDAP is used for the user registry and includes some sample users.
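To confirm that these supporting services started, you can filter the pod list. The pod names vary by deployment, so adjust the filter if necessary.
    oc get pods | grep -iE 'postgres|openldap'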
  1. Go to the cert-kubernetes directory on your local machine.
    cd cert-kubernetes
  2. Log in to the cluster with the non-administrator user. Using the OpenShift CLI.
    oc login -u cp4a-user -p cp4a-pwd
  3. Look for the status field of each capability by running the oc get command.
    oc get ICP4ACluster <instance_name> -o=jsonpath='{.status.components.<component_id>}'
    Note: If you selected "FileNet Content Manager" with no other capabilities, then the Kind parameter is set to Content instead of ICP4ACluster.
    oc get Content <instance_name> -o=jsonpath='{.status.components.<component_id>}'
    Where <component_id> can be any of the following IDs:
    status:
      components:
        ae-icp4adeploy-workspace-aae
        viewone
        gitgatewayService
        css
        adsMongo
        contentDesignerRepoAPI
        adsLtpaCreation
        adsCredentialsService
        workflow-authoring
        graphql
        adsRrRegistration
        adsRuntimeService
        ae-icp4adeploy-pbk
        app-engine
        contentProjectDeploymentService
        contentDesignerService
        adsGitService
        cmis
        adsParsingService
        bastudio
        ier
        adsRestApi
        adsBuildService
        navigator
        baw
        odm
        cpe
        iccsap
        tm
        adsFront
        adsRunService
        prereq
        adsRuntimeBaiRegistration
        resource-registry
        pfs
        adsDownloadService
        ca
        baml
  4. Get the access information and the user credentials that you need to log in to the web applications by running either of the following commands:
    oc get cm <instance_name>-cp4ba-access-info -o=jsonpath='{.data.<component_id>-access-info}'
    oc describe icp4acluster <instance_name> -n <namespace>
    Note: If you selected "FileNet Content Manager" with no other capabilities, then the Kind parameter is set to Content instead of ICP4ACluster.
    oc describe Content <instance_name> -n <namespace>
    Note: The bastudio-access-info section provides access information for the Cloud Pak dashboard (Zen UI) and Business Automation Studio, which is installed by several patterns. You can use the included URLs and credentials to access the application designers of the installed components.
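    For example, assuming a deployment instance named icp4adeploy (your instance name might differ), you can check the status of the navigator component and read its access information with commands like the following. The component ID and ConfigMap key follow the patterns that are shown above and are only illustrations.
    oc get ICP4ACluster icp4adeploy -o=jsonpath='{.status.components.navigator}'
    oc get cm icp4adeploy-cp4ba-access-info -o=jsonpath='{.data.navigator-access-info}'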
Tip: Run the post-installation script on your cluster to further validate your deployment. For more information, see Validating your starter deployment.

After you have the routes and admin user information, check to see whether you need to do the following tasks.

Log in to the IBM Cloud Pak Platform UI (Zen UI)

Business Automation Studio uses the Zen UI to provide a role-based user interface for all Cloud Pak capabilities. Capabilities are dynamically available in the UI based on the role of the user who logs in. The URL for the Zen UI is listed in the output of the post-deployment script.

You have two options to log in: Enterprise LDAP and IBM provided credentials (cpadmin only). To log in to the Admin Hub to configure the LDAP, click IBM provided credentials (cpadmin only). You can get the details for the IBM-provided cpadmin user from the contents of the platform-auth-idp-credentials secret in the namespace that is used for the CP4BA deployment.

oc -n <namespace> get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' | base64 -d && echo
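
The user name for this account is typically stored in the same secret under the admin_username key. If your deployment stores it there, you can retrieve it in the same way:

oc -n <namespace> get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_username}' | base64 -d && echo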

If you want to log in by using the configured LDAP, click Enterprise LDAP and enter the cp4admin user and the password that are stored in the cp4ba-access-info ConfigMap. The cp4admin user has access to Business Automation Studio features.

If you want to add more users, you need to log in as the Zen UI administrator. The kubeadmin user in the Red Hat OpenShift authentication and the IBM-provided cpadmin user have the Zen UI administrator role. When you are logged in, you can add users to the Automation Developer role to enable users and user groups to access Business Automation Studio and work with business applications and business automations. For more information about adding users, see Completing post-deployment tasks for Business Automation Studio. For more information about the Automation Developer role, see Roles and permissions.

Note: If you included multiple capabilities from FileNet Content Manager (FNCM), Automation Document Processing (ADP), and Business Automation Application (BAA) in your CP4BA deployment, then use the Navigator for CP4BA heading in the cp4ba-access-info ConfigMap and the custom resource status fields to find the route URL for Business Automation Navigator.

If you included FileNet Content Manager (FNCM) without the other capabilities, then use the Navigator for FNCM heading in the cp4ba-access-info ConfigMap and the custom resource status fields to find the route URL for Business Automation Navigator.
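
To view all of the access information entries in one place, including the Navigator headings, you can describe the ConfigMap. Replace the placeholders with your instance name and project.

oc describe cm <instance_name>-cp4ba-access-info -n <namespace>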

Use the LDAP user registry

The LDAP server has a set of predefined users and groups to use with your starter environment. Changes to the user repository are not persisted after a pod restart. To log in and view the users, follow these steps.

  • Get the <deployment-name> of the deployment. The <deployment-name> is the value of the metadata.name parameter in the CR that you installed. Using the OpenShift CLI.
    oc get icp4acluster
    Note: If you selected "FileNet Content Manager" with no other capabilities, then the Kind parameter is set to Content instead of ICP4ACluster.
    oc get Content
  • The LDAP admin user is "cn=admin,dc=example,dc=org". To get the LDAP admin password for the LDAP admin user, run the following OpenShift CLI command.
    oc get secret <deployment-name>-openldap-secret -o jsonpath="{.data.LDAP_ADMIN_PASSWORD}" | base64 -d
  • You can review the predefined users and groups with the following command.
    oc get secret <deployment-name>-openldap-customldif -o yaml
    To provide users for Task Manager, the script creates the following LDAP users and groups.
    1. User names: cp4admin, user1, user2, up to and including user10.
    2. Group names: TaskAdmins, TaskUsers, and TaskAuditors.

    The cp4admin user is assigned to "TaskAdmins". The LDAP users user1 - user5 are assigned to "TaskUsers", and the users user6 - user10 are assigned to "TaskAuditors".

  • To modify an existing user's password:
    Note: Do not change the password of the cp4admin user after the Content Platform Engine (CPE) is initialized. Changing the password of the Domain admin user needs extra steps. For more information, see Update System User credentials.
    1. In the OpenShift console, go to Workloads > Secrets, and select the icp4adeploy-openldap-customldif secret.
    2. Click Actions > Edit Secret.
    3. Change the password for a specified user and click Save.
    4. Go to Workloads > Pods, search for the "openldap" pod.
    5. In the overflow menu for the pod, click Delete Pod to restart it.
  • To add a user:
    1. In the OpenShift console, go to Workloads > Secrets, and select the icp4adeploy-openldap-customldif secret.
    2. Click Actions > Edit Secret.
    3. Copy the attributes from an existing user, remove the attributes that you do not need, enter the information for the new user, and click Save. The following example is for the user, "newuser":
      dn: uid=newuser,dc=example,dc=org
      uid: newuser
      cn: newuser
      sn: newuser
      userPassword: <password>
      objectClass: top
      objectClass: posixAccount
      objectClass: organizationalPerson
      objectClass: inetOrgPerson
      objectClass: person
      uidNumber: 14583345
      gidNumber: 1456456
      homeDirectory: /home/newuser/
      mail: newuser@example.org 

      The uidNumber must be a unique number that is different from the existing uidNumbers.

    4. Go to Workloads > Pods, search for the "openldap" pod.
    5. In the overflow menu for the pod, click Delete Pod to restart it.
    6. Sign in to the Common Web UI by following the steps in Accessing your cluster by using the console.
    7. Follow the steps in Managing console access to add the user to the Cloud Pak Platform UI (Zen).
  • To add a group:
    1. In the Red Hat OpenShift console, go to Workloads > Secrets, and select the icp4adeploy-openldap-customldif secret.
    2. Click Actions > Edit Secret.
    3. Copy the attributes from an existing group, remove the attributes that you do not need, enter the information for the new group, and click Save.

      The following example is for a group name of "NewGroup".

      dn: cn=NewGroup,dc=example,dc=org
      objectClass: groupOfNames
      objectClass: top
      cn: NewGroup
      member: uid=user1,dc=example,dc=org
      member: uid=user2,dc=example,dc=org
      member: uid=user3,dc=example,dc=org
      member: uid=user4,dc=example,dc=org
    4. Go to Workloads > Pods, and search for the openldap pod.
    5. In the overflow menu for the pod, click Delete Pod to restart it.
    6. Sign in to the Common Web UI by following the steps in Accessing your cluster by using the console.
    7. Follow the steps in Managing user groups to add the group to the Cloud Pak Platform UI (Zen).
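  • To restart the openldap pod from the command line instead of the console after you edit the secret, you can delete the pod so that it restarts with the updated configuration. The pod name in the second command is a placeholder; use the name that the first command returns.
    oc get pods -n <namespace> | grep openldap
    oc delete pod <openldap-pod-name> -n <namespace>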

Create a storage policy and associate it with the Advanced Storage Area that was created during the deployment

  1. Create a storage policy and associate the storage policy with the existing advanced storage area. See Storage policies for more information.
  2. Assign the newly created storage policy to an existing document class.

Enable the GraphiQL integrated development environment for FileNet® Content Manager

The GraphiQL integrated development environment is not enabled by default because of a security risk. If you want to include this capability in your starter environment, you can add the parameter to enable the IDE.

  1. Find the generated YAML file in the directory where you ran the deployment script. For example, generated-cr/ibm_cp4a_cr_final.yaml.
  2. Add the following parameter to the file:
    graphql:
      graphql_production_setting:
        enable_graph_iql: true
  3. Apply the updated custom resource YAML file.

    In the next reconciliation loop, the operator picks up the change, and includes GraphiQL with your deployment.
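
    For example, assuming that the generated file is at the path shown in step 1 and that <namespace> is your CP4BA project, a command like the following applies the file:
    oc apply -f generated-cr/ibm_cp4a_cr_final.yaml -n <namespace>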

Import sample data for Business Automation Insights

If you selected Business Automation Insights as an optional component, then you can test and explore the component by importing sample data. For more information, see https://github.com/icp4a/bai-data-samples.

Enabling Business Automation Insights for FileNet Content Manager

If you selected Business Automation Insights as an optional component and included the Content Event Emitter in your deployment, you must update the deployment to add the Kafka certificate to the trusted certificate list.

  1. Create a secret with your Kafka certificate, for example:
    oc create secret generic eventstreamsecret --from-file=tls.crt=eventstream.crt
  2. Find the generated YAML file in the directory where you ran the deployment script. For example, generated-cr/ibm_cp4a_cr_final.yaml.
  3. Update the trusted_certificate_list parameter to include the secret that you created.
    shared_configuration:
      trusted_certificate_list: ['eventstreamsecret']

    If other certificates are in the list, use a comma to separate your new entry.

  4. Apply the updated custom resource YAML file.
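
    Optionally, before you apply the change, you can check that the secret contains the expected certificate. This example assumes that openssl is available on your workstation.
    oc get secret eventstreamsecret -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates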

Verify the creation of the CDD repository for Content Designer

If you installed FileNet Content Manager, open a remote shell in the gitea-deploy pod (for example, by using the oc rsh command) and run the following command:

ls -l /data/git/repositories/content-designer/

If the output shows cdd.git, then the content-designer directory exists and the Git repository was created successfully.

drwxr-xr-x 7 git git 147 May 5 15:58 cdd.git

If the output does not show the CDD repository, go to the operator logs to understand why the deployment failed.
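
A minimal way to run the check without opening an interactive shell, assuming that the pod name contains "gitea" (adjust the filter and the project to match your deployment):

GITEA_POD=$(oc get pods -n <namespace> -o name | grep gitea | head -n 1)
oc exec -n <namespace> ${GITEA_POD} -- ls -l /data/git/repositories/content-designer/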