Recommended: Preparing databases and secrets for your chosen capabilities by running a script

The cp4a-prerequisites.sh script is provided in the cert-kubernetes repository to help you prepare for an installation of Cloud Pak for Business Automation. The script generates property files for the selected capabilities in your deployment and must be run before your deployment is installed.

Before you begin

Before you use the cp4a-prerequisites.sh script to generate the property files, make sure that you review the requirements for the capabilities that you want to install together with your target database. This information is normally found in the preparing sections for each capability, where you can find the steps to manually create the databases. Consider your intended workload and the number of users that you want to access the services. For operational and performance reasons, it is important that network latency between the applications and the database server is as small as possible. For deployments that need to operate continuously with no interruptions in service, enable the databases for high availability (HA).

For more information about the supported databases, see the Software Product Compatibility Reports.

Tip: Run the cp4a-prerequisites.sh script in the "property" mode to create the property files for your selected capabilities and database. Then, note the properties in these files so that you can match the values with the configuration of your database services.

The cp4a-prerequisites.sh script requires the following utility tools to be installed on your client machine.

Note: If the script detects that any of the required tools are missing on the client, it reports the names and versions of the missing tools, and then offers to install them.
  • kubectl (the version that matches your Red Hat® OpenShift® cluster version)

    If you prepared your client machine for an online deployment, then kubectl is already installed. For more information, see Preparing a client to connect to the cluster.

  • Java™ runtime environment (JRE 8.x is installed by the script if it is not found)
    • Java
    • keytool – Make sure that you add the keytool to your system PATH.
  • OpenSSL
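The prerequisite check that the script performs can be approximated manually. The following sketch (the check_tools helper is hypothetical, not part of cp4a-prerequisites.sh) reports any of the listed tools that are missing from your PATH:

```shell
# Report which of the given tools are not found on the PATH.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  printf '%s' "$missing"
}

missing=$(check_tools kubectl java keytool openssl)
if [ -z "$missing" ]; then
  echo "All required tools found"
else
  echo "Missing:$missing"
fi
```

If keytool is reported missing even though a JRE is installed, add the JRE bin directory to your system PATH.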

Create an environment variable to locate the target CP4BA namespace (cp4ba-project). Before you run the command, you must be logged in to your Red Hat OpenShift cluster.

export NAMESPACE=<cp4ba-project>
Note: If you deployed the operators in a separate namespace from the target CP4BA namespace, you must create the secrets in the CP4BA deployment namespace. For example, if you have cp4a-operators and cp4a-operands namespaces, then use the latter. Otherwise, use the name of the target CP4BA namespace.

About this task

Instead of going through the many documented steps to create the databases and secrets for the capabilities in your Cloud Pak for Business Automation deployment, you can use a script to generate the SQL statement files (scripts) and YAML template files for the secrets.

The cp4a-prerequisites.sh script has three modes.

property

The property mode supports the generation of property files for multiple database servers. The script uses the "DB_SERVER_LIST" key in the cp4ba_db_server.property file to list the database server instances, and creates the user property files (cp4ba_user_profile.property, cp4ba_db_name_user.property, cp4ba_db_server.property, and cp4ba_LDAP.property). Review and modify these files to match your infrastructure. Add values for the database server name, database names, database schema, LDAP server name, and LDAP attributes.

generate
The generate mode uses the modified property files to generate the DB SQL statement file and the YAML template for the secret.
validate
The validate mode checks whether the generated databases and the secrets are correct and ready to use in a CP4BA deployment.

After you download cert-kubernetes, change to the scripts folder under cert-kubernetes. For more information about downloading cert-kubernetes, see Preparing a client to connect to the cluster.

The script can be run from this location and has the following options:

Usage: cp4a-prerequisites.sh -m [modetype] -n [CP4BA_NAMESPACE]
Options:
  -h  Display help
  -m  The valid mode types are: [property], [generate], or [validate]
  -n  The target namespace of the CP4BA deployment.
      STEP1: Run the script in [property] mode. Creates property files (DB/LDAP property file) with default values (database name/user).
      STEP2: Modify the DB/LDAP/user property files with your values.
      STEP3: Run the script in [generate] mode. Generates the DB SQL statement files and YAML templates for the secrets based on the values in the property files.
      STEP4: Create the databases and secrets by using the modified DB SQL statement files and YAML templates for the secrets.
      STEP5: Run the script in [validate] mode. Checks whether the databases and the secrets are created before you install CP4BA. 

All three modes can be run on the same client machine, but you can also run the property and generate modes on different clients. If you want to use different clients, copy the temporary property file that the property mode creates, together with the output folder, to the other client. Make a copy of the following files and put them into the downloaded cert-kubernetes folder on the other client:

cert-kubernetes/scripts/.tmp/.TEMPORARY.property
cert-kubernetes/scripts/cp4ba-prerequisites/project/$NAMESPACE
Note: Some properties use an absolute path in their values. If you copy the files to a different machine, make sure that the absolute paths are updated to match the new location.

The values of the following properties need to be modified after you copy the cp4ba-prerequisites folder to a different client.

********cp4ba_db_server.property*************
<DB_PREFIX_NAME>.DATABASE_SSL_CERT_FILE_FOLDER

********cp4ba_LDAP_server.property*************
LDAP_SSL_CERT_FILE_FOLDER

************cp4ba_user_profile.property******************
APP_ENGINE.SESSION_REDIS_SSL_CERT_FILE_FOLDER

If you ran the cp4a-prerequisites.sh -m generate command on the original client, you must run the command again after you modify the property files to re-create the SSL secret templates with the updated absolute paths.
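As a sketch of that path update, the following commands rewrite an old absolute path prefix to a new one in a copied property file. The prefixes and the sample property line are illustrative values, not output from the script:

```shell
# Rewrite the absolute-path prefix in a copied property file.
# OLD_PREFIX and NEW_PREFIX are example values; use your own locations.
OLD_PREFIX="/home/user1/cert-kubernetes"
NEW_PREFIX="/opt/cert-kubernetes"

workdir=$(mktemp -d)
propfile="$workdir/cp4ba_LDAP_server.property"

# Sample content standing in for the copied property file.
printf '%s\n' \
  "LDAP_SSL_CERT_FILE_FOLDER=\"$OLD_PREFIX/scripts/cp4ba-prerequisites/project/prod/propertyfile/cert/ldap\"" \
  > "$propfile"

# Replace the prefix in place; repeat for each property file listed above.
sed -i.bak "s|$OLD_PREFIX|$NEW_PREFIX|g" "$propfile"
cat "$propfile"
```

The pipe (|) delimiter in the sed expression avoids having to escape the slashes in the paths.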

Procedure

  1. Make sure that you downloaded the cert-kubernetes repository to a Linux® based machine (CentOS Stream/RHEL/macOS) or a client that is connected to a Linux-based machine.
  2. Make sure that you are in the scripts folder under cert-kubernetes.
  3. Log in to the target cluster.

    Using the Red Hat OpenShift CLI:

    oc login https://<cluster-ip>:<port> -u <cluster-user> -p <password>

    On ROKS, if you are not already logged in:

    oc login --token=<token> --server=https://<cluster-ip>:<port>
  4. Run the script in the "property" mode.
    ./cp4a-prerequisites.sh -m property -n $NAMESPACE

    The $NAMESPACE is the target project for the CP4BA deployment.

    Follow the prompts in the command window to enter the required information.

    1. Select the Cloud Pak for Business Automation capabilities that you want to install.
      Restriction:
      • If you previously installed another component that contains Business Automation Studio, and you select Workflow Process Service Authoring, then you can continue to use the same database that you already configured for Business Automation Studio.
      • If you select Workflow Process Service Authoring by itself, then you must select PostgreSQL as the database type.
      • If you select Workflow Process Service Authoring and another component that contains Business Automation Studio, then you must select PostgreSQL as the database type.
      • If you select Workflow Process Service Authoring and ODM or FNCM or BAW runtime, then you must select PostgreSQL as the database type.
      Important: If you select "FileNet® Content Manager" with no other capabilities, then the script assumes that the CP4BA FileNet Content Manager operator (ibm-content-operator) is used. The custom resource in this case sets the Kind parameter to Content instead of ICP4ACluster.
      Kind: Content

      When you generate the custom resource with the cp4a-deployment.sh script, the custom resource file is named ibm_content_cr_final.yaml.

    2. Select the optional components that you want to include.
    3. Select Yes if you want to enable FIPS for your Cloud Pak for Business Automation deployment.
      Tip: The script asks this question only if you chose to check whether FIPS is enabled on the cluster when you ran the cluster admin script. The response is stored in the cp4ba-fips-status configMap.

      If you select Yes, the script creates the CP4BA.ENABLE_FIPS property in the cp4ba_user_profile.property file and records your selection.

      CP4BA.ENABLE_FIPS="true"

      If you select No, the value is stored as false.

      The property determines the value of the shared_configuration.enable_fips parameter in the custom resource.
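For orientation only, a minimal sketch of how that property surfaces in the custom resource (all surrounding CR fields are omitted):

```yaml
spec:
  shared_configuration:
    # Set from CP4BA.ENABLE_FIPS in cp4ba_user_profile.property
    enable_fips: true
```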

    4. Choose the LDAP type that you want to use for the CP4BA deployment.

      By default, the LDAP is SSL enabled. You can disable SSL for the LDAP when you edit the LDAP property file. The script shows the following message:

      [*] You can change the property "LDAP_SSL_ENABLED" in the property file "cp4ba_LDAP.property" later. "LDAP_SSL_ENABLED" is "TRUE" by default.
    5. Enter your dynamic storage classes for slow, medium, fast file storage (RWX).
    6. Enter a block storage class name (RWO).
    7. Select a deployment profile size from small, medium, or large [1 to 3]. The default is small (1).
    8. Choose the database type that you want to use for the CP4BA deployment.
      Note: If you select EDB Postgres, the CP4BA operator creates and initializes the database instances.

      If you select a different database type and it is not an external PostgreSQL, the CP4BA operator creates and initializes EDB Postgres databases for Document Processing and Automation Decision Services. You do not need to complete anything in the property files for these instances.

      The script sets the following fields in the cp4ba_db_server.property file.

      • postgresql-edb.DATABASE_SERVERNAME="postgres-cp4ba-rw.{{ meta.namespace }}.svc"
      • postgresql-edb.DATABASE_PORT="5432"
      • postgresql-edb.DATABASE_SSL_SECRET_NAME="{{ meta.name }}-pg-client-cert-secret"

      The CP4BA operator creates a cluster custom resource (CR) so that the EDB Postgres operator can create the EDB Postgres instance. The CP4BA operator generates the secret "{{ meta.name }}-pg-client-cert-secret" based on the root CA, which is set in the shared_configuration.root_ca_secret parameter.

      The script also generates the cp4ba_db_name_user.property file, which is used to define the database server name, username, and password. The script uses this property file to create the data source section in the CP4BA CR and generate the secret templates. If the database names are not specified in the CR, the operator uses the default database names for each data source that uses the EDB Postgres instance. The same applies to the username and password for each database. If the username and password exist in the secret for a component, then the operator creates the user in the EDB Postgres instance with the password that is specified in the secret. If the username or password, or both, do not exist in the secret for a component, then a default username and password are used for that database. If the operator uses the default username and password, it also updates the secrets for each component.

      After the EDB Postgres instance is created, the CP4BA operator does not manage it and does not change it in any way. Your system administrator can manage the EDB cluster instance by following the EDB Postgres documentation. For more information about backing up the EDB Postgres instance, see Backing up EDB Postgres.

      By default, the databases are SSL enabled. You can disable SSL for a database when you edit the database property file. The script shows the following message:

      [*] You can change the property "DATABASE_SSL_ENABLE" in the property file "cp4ba_db_server.property" later. "DATABASE_SSL_ENABLE" is "TRUE" by default.
      Restriction: If you want to use Oracle or another custom database (a database that is not in the list) for Operational Decision Manager, then you must follow the steps to manually create the database and the secret for the odm_configuration.externalCustomDatabase.datasourceRef custom resource parameter. For more information, see Configuring a custom external database.
    9. If required: If you selected a database type other than EDB Postgres, then enter the alias names for all the database servers to be used by the CP4BA deployment. For example, dbserver1,dbserver2 sets two database servers. The first server is named dbserver1 and the second server is named dbserver2.
      Note: The database server names cannot contain a dot (.) character.
    10. If required: If you selected FileNet Content Manager, then enter the number of object stores of a FileNet P8 domain to configure for the CP4BA deployment.
    11. If you chose an external PostgreSQL database type for your deployment, you can also choose to use the database for Platform UI (Zen) and Identity Management (IM) as the metastore DB. Collect the relevant database information so you can enter the required values in the generated property files. For more information, see Setting up an external PostgreSQL database server for IM and Configuring an external PostgreSQL database for Zen.
      Remember: Zen stores users, groups, service instances, vault integration, and secret references in the metastore DB.
      Note: If you select Yes to choose an external PostgreSQL DB, the script generates properties with the prefix CP4BA.ZEN_EXTERNAL_POSTGRES_DATABASE and CP4BA.IM_EXTERNAL_POSTGRES_DATABASE in the cp4ba_user_profile.property file.

      If you chose any other type of database, then the CP4BA operator creates and initializes the embedded cloud-native EDB Postgres database instances.

    12. Choose whether to use an external TLS certificate for the OpenSearch and Kafka deployments. The OpenSearch and Kafka operators can be configured to use an external TLS certificate instead of the default root CA. Copy the certificates into the dedicated folder for your database (cert-kubernetes/scripts/cp4ba-prerequisites/project/$NAMESPACE/propertyfile/cert/db/<database folder>).

      If you select No, the CP4BA operator creates the leaf certificates based on the default root CA. The CP4BA operator checks the namespace for the cp4ba-tls-issuer resource, and if it exists, it then creates the necessary certificates for Kafka and OpenSearch.

    When the script is finished, the output lists some next actions. Read them carefully and make sure that you complete them all before you go to the next step.

    ============== Created all property files for CP4BA ==============
    [NEXT ACTIONS]
    Enter the <Required> values in the property files under <extracted_cert-kubernetes_archive>/cert-kubernetes/scripts/cp4ba-prerequisites/project/<CP4BA_NAMESPACE>/propertyfile
    [*] The key name in the property file is created by the cp4a-prerequisites.sh and is NOT EDITABLE.
    [*] The value in the property file must be within double quotes.
    [*] The value for User/Password in [cp4ba_db_name_user.property] [cp4ba_user_profile.property] file should NOT include special characters: single quotation "'"
    [*] The value in [cp4ba_LDAP.property] or [cp4ba_External_LDAP.property] [cp4ba_user_profile.property] file should NOT include special character '"'

    The propertyfile directory has the following file structure:

    ├── cert
    │   ├── db
    │   │   ├── <db_server_alias1>
    │   │   └── ...
    │   └── ldap
    ├── cp4ba_LDAP.property
    ├── cp4ba_db_name_user.property
    ├── cp4ba_db_server.property
    └── cp4ba_user_profile.property
    Note: The db directory contains one or more db server alias names that you specified.

    If you plan to enable SSL-based connections for your database or LDAP servers, you must copy the SSL certificate from your remote server to the propertyfile folder structure.

    • The SSL certificate for the LDAP server must be named ldap-cert.crt and copied to the folder cp4ba-prerequisites/project/<cp4ba_namespace>/propertyfile/cert/ldap.
    • If you plan to enable SSL-based connections for your PostgreSQL database server, use the following guidance.
      • If you use both server and client authentication, retrieve the server certificate, client certificate, and client private key from your database server. Copy them into the folder cp4ba-prerequisites/project/<cp4ba_namespace>/propertyfile/cert/db/<DB_ALIAS_NAME>. The files must be named root.crt, client.crt, and client.key.
      • If you use server-only authentication, retrieve the server certificate from your database server and copy it into the folder cp4ba-prerequisites/project/<cp4ba_namespace>/propertyfile/cert/db/<DB_ALIAS_NAME>. The certificate must be named db-cert.crt.
    • If you plan to enable SSL-based connections for any other database server type, retrieve the server certificate file from your remote database server and copy it into the folder cert-kubernetes/scripts/cp4ba-prerequisites/project/prod/propertyfile/cert/db/<DB_ALIAS_NAME>. The certificate must be named db-cert.crt.
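A sketch of those copy steps, using placeholder certificate files and an example alias (dbserver1). In practice, retrieve the certificates from your own LDAP and database servers (for example, with scp):

```shell
# Place the SSL certificates with the exact file names the script expects.
# PROPDIR stands in for .../project/<cp4ba_namespace>/propertyfile.
PROPDIR=$(mktemp -d)
DB_ALIAS="dbserver1"
mkdir -p "$PROPDIR/cert/ldap" "$PROPDIR/cert/db/$DB_ALIAS"

# Placeholder files standing in for certificates retrieved from your servers.
touch /tmp/my-ldap-server.pem /tmp/my-db-server.pem

# The LDAP server certificate must be named ldap-cert.crt.
cp /tmp/my-ldap-server.pem "$PROPDIR/cert/ldap/ldap-cert.crt"

# Server-only authentication: the DB certificate must be named db-cert.crt.
cp /tmp/my-db-server.pem "$PROPDIR/cert/db/$DB_ALIAS/db-cert.crt"
```

For server and client authentication on PostgreSQL, copy root.crt, client.crt, and client.key into the same db/<DB_ALIAS_NAME> folder instead.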
  5. Make sure that you are in the propertyfile folder under cp4ba-prerequisites/project/$NAMESPACE and edit the property files as indicated by the NEXT ACTIONS messages from the script. Update the property files (cp4ba_db_name_user.property, cp4ba_db_server.property, cp4ba_LDAP.property, cp4ba_user_profile.property, and optionally cp4ba_External_LDAP.property) with the values from your environment.
    Important: Use the {Base64} prefix for all passwords that have special characters. The following examples show how the {Base64} prefix is applied to an encoded password that contains a special character.
    LDAP_BIND_DN_PASSWORD="{Base64}UGFzc3dvcmQk"  # Which is "Password$" when the Base64 string is decoded.
    LTPA_PASSWORD="{Base64}UGFzc3dvcmQk"  # Which is "Password$" when the Base64 string is decoded.
    KEYSTORE_PASSWORD="{Base64}UGFzc3dvcmQk"  # Which is "Password$" when the Base64 string is decoded.
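A minimal helper to produce such values. The function name encode_prop is hypothetical; the property name and password are the examples above:

```shell
# Base64-encode a password and prepend the {Base64} prefix that the
# property files expect for passwords with special characters.
encode_prop() {
  printf '{Base64}%s' "$(printf '%s' "$1" | base64)"
}

echo "LDAP_BIND_DN_PASSWORD=\"$(encode_prop 'Password$')\""
# Prints: LDAP_BIND_DN_PASSWORD="{Base64}UGFzc3dvcmQk"
```

Note the single quotation marks around the password on the command line; they stop the shell from expanding $ and other special characters.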
    1. Edit the global section in the cp4ba_user_profile.property file, and then the other sections for each capability that you selected.

      The global section contains license properties and the needed storage classes. The FIPS enablement property and the egress property to generate network policies are also present. The BAN section is always included, and the users and groups must be from your LDAP.

      ####################################################
      ## USER Property for CP4BA ##
      ####################################################
      ## Use this parameter to specify the license for the CP4A deployment and
      ## the possible values are: non-production and production and if not set, the license will
      ## be defaulted to production.  This value could be different from the other licenses in the CR.
      CP4BA.CP4BA_LICENSE="<Required>"
      ## On OCP, the script populates these three (3) parameters based on your input for "production" deployment.
      ## The script populates the storage parameters based on your input.
      ## If you manually deploy without using the deployment script, then you must enter the different storage classes for the slow, medium,
      ## and fast storage parameters. If you only have 1 storage class defined, then you can use that 1 storage class for all 3 parameters.
      ## The sc_block_storage_classname is for Zen. Zen requires block storage (RWO) for the metastore DB.
      CP4BA.SLOW_FILE_STORAGE_CLASSNAME="<my_file_classname>"
      CP4BA.MEDIUM_FILE_STORAGE_CLASSNAME="<my_file_classname>"
      CP4BA.FAST_FILE_STORAGE_CLASSNAME="<my_file_classname>"
      CP4BA.BLOCK_STORAGE_CLASS_NAME="<my_block_classname>"
      
      ## Enable/disable FIPS mode for the deployment (default value is "false").
      CP4BA.ENABLE_FIPS="false"
      
      ## Enable or disable the generation of network policies.
      CP4BA.ENABLE_GENERATE_SAMPLE_NETWORK_POLICIES="true"
      
      ####################################################
      ## USER Property for BAN ##
      ####################################################
      ## Provide the user name for BAN. For example: "BANAdmin"
      BAN.APPLOGIN_USER="<Required>"
      ## Provide the user password for BAN.
      BAN.APPLOGIN_PASSWORD="<Required>"
      ## Provide LTPA key password for BAN deployment.
      BAN.LTPA_PASSWORD="<Required>"
      ## Provide keystore password for BAN deployment.
      BAN.KEYSTORE_PASSWORD="<Required>"
      ## Provide the user name for jMail used by BAN. For example: "jMailAdmin"
      BAN.JMAIL_USER_NAME="<Optional>"
      ## Provide the user password for jMail used by BAN.
      BAN.JMAIL_USER_PASSWORD="<Optional>"

      Some values, such as a connection point name and the table spaces for the Workflow object store initialization of Content, can be any string, but the deployment needs them to identify these resources.

      Important: The passwords that are used for CONTENT.LTPA_PASSWORD and BAN.LTPA_PASSWORD must be the same in the cp4ba_user_profile.property file.
    2. Follow the instructions in the cp4ba_user_profile.property file to enter all the <Required> values for the external PostgreSQL DB.

      ## Please get "<your-server-certification: root.crt>" "<your-client-certification: client.crt>" "<your-client-key: client.key>"
      ## from server and client, and copy into this directory.
      ## Default value is "/home/cert-kubernetes/scripts/cp4ba-prerequisites/project/<CP4BA_NAMESPACE>/propertyfile/cert/zen_external_db".
      CP4BA.ZEN_EXTERNAL_POSTGRES_DATABASE_SSL_CERT_FILE_FOLDER="/home/cert-kubernetes/scripts/cp4ba-prerequisites/project/<CP4BA_NAMESPACE>/propertyfile/cert/zen_external_db"
      ## Name of the schema to store monitoring data. The default value is "watchdog".
      CP4BA.ZEN_EXTERNAL_POSTGRES_DATABASE_MONITORING_SCHEMA="watchdog"
      ## Name of the database. The default value is "zencnpdb".
      CP4BA.ZEN_EXTERNAL_POSTGRES_DATABASE_NAME="zencnpdb"
      ## Database port number. The default value is "5432".
      CP4BA.ZEN_EXTERNAL_POSTGRES_DATABASE_PORT="5432"
      ## Name of the read database host. The cloud-native-postgresql operator on k8s provides this endpoint. If the DB is not running on k8s, use the same hostname as the DB host.
      CP4BA.ZEN_EXTERNAL_POSTGRES_DATABASE_R_ENDPOINT="<Required>"
      ## Name of the database host.
      CP4BA.ZEN_EXTERNAL_POSTGRES_DATABASE_RW_ENDPOINT="<Required>"
      ## Name of the schema to store zen metadata. The default value is "public".
      CP4BA.ZEN_EXTERNAL_POSTGRES_DATABASE_SCHEMA="public"
      ## Name of the database user. The default value is "zencnp_user".
      CP4BA.ZEN_EXTERNAL_POSTGRES_DATABASE_USER="zencnp_user"
    3. If you entered a single database server, then the DB_ALIAS_NAME is set automatically. If you need more than one server, edit the <DB_ALIAS_NAME> property prefixes in the cp4ba_db_name_user.property file to assign each component to a database server name defined in the DB_SERVER_LIST key name in the cp4ba_db_server.property file. The value of a <DB_ALIAS_NAME> in the username property file must match a value that is defined in the DB_SERVER_LIST.

      Note: If you see a section that is commented out with a single hash (#), verify that the capability is enabled and then uncomment these parameter lines. Enter the correct values for the username and password.

      The following example shows the commented lines for the Case History database parameters.

      ## Provide the name of the database for Case History when Case History Emitter is enabled. For example: "CHOS"
      # <DB_ALIAS_NAME>.CHOS_DB_NAME="CHOS"
      ## Provide database schema name. This parameter is optional. If not set, the schema name is the same as database user name.
      ## For DB2, the schema name is case-sensitive, and must be specified in uppercase characters.
      # <DB_ALIAS_NAME>.CHOS_DB_CURRENT_SCHEMA="<Optional>"
      ## Provide the user name for the object store database required by Case History when Case History Emitter is enabled. For example: "dbuser1"
      # <DB_ALIAS_NAME>.CHOS_DB_USER_NAME="<youruser1>"
      ## Provide the password (if password has special characters then Base64 encoded with {Base64} prefix, otherwise use plain text) for the user of Object Store of P8Domain.
      # <DB_ALIAS_NAME>.CHOS_DB_USER_PASSWORD="{Base64}<yourpassword>"
    4. Enter the required values for the LDAP variables in the cp4ba_LDAP.property file.

      Replace the <Required> string with your existing LDAP server parameters, its query objects, users, and groups.

      Important: If your target platform is ROKS Virtual Private Cloud (VPC), you can validate the connection to your LDAP only by using a VM client of the ROKS VPC. Set the LDAP server to the internal IP address or DNS of the ROKS VPC. For example, if the IP address of your LDAP is 10.240.0.16, then change the LDAP_SERVER property in the cp4ba_LDAP.property file to this address.
      ## The name of the LDAP server to connect
      LDAP_SERVER="10.240.0.16"

      If your client is not connected to the ROKS VPC, you can still set the IP address to propagate the value to the custom resource.

    5. All names and passwords in the cp4ba_db_name_user.property file must be entered manually.

      Replace the <yourpassword> strings with your database user passwords.

      Restriction: The username values cannot contain special characters. Special characters include the equal sign (=), a forward slash (/), a colon (:), a single dot (.), single quotation marks ('), and double quotation marks ("). If a value does contain a special character, the script fails to parse the value.

      The password can contain special characters, except the single quotation mark ('). The single quotation mark is used to enclose a string that contains special characters. If you have a password without special characters, use the string as plain text. For example, if you want the password to be mypassword, specify the password as "mypassword".

      dbserver1.GCD_DB_USER_NAME="GCDDB"
      dbserver1.GCD_DB_USER_PASSWORD="mypassword"

      If you have a password with special characters (&passw0rd), you must encode this string and add {Base64} before you add the value to the property. To encode the password on Linux, run the following command:

      # echo -n '&passw0rd' | base64
      JnBhc3N3MHJk

      Add the encoded value to the property with the {Base64} prefix.

      dbserver1.GCD_DB_USER_NAME="GCDDB"
      dbserver1.GCD_DB_USER_PASSWORD="{Base64}JnBhc3N3MHJk"
  6. When the user property files are complete and ready, make sure that you are in the scripts folder under cert-kubernetes, and run the cp4a-prerequisites.sh script in the "generate" mode.
    ./cp4a-prerequisites.sh -m generate -n $NAMESPACE
    Note: If the script detects that the property files do not have custom values, the script stops and displays messages to help identify the missing values:
    Change the prefix "<DB_ALIAS_NAME>" in propertyfile/cp4ba_db_name_user.property to assign which database is used by the component.
    Found invalid value(s) "<Required>" in property file "propertyfile/cp4ba_db_name_user.property". Enter the correct value.
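You can run a similar check yourself before you switch to generate mode. This sketch (the check_required helper is hypothetical) scans a folder for leftover <Required> placeholders:

```shell
# Report any property values still set to "<Required>" under a folder.
# Returns nonzero when placeholders remain.
check_required() {
  if grep -rn '<Required>' "$1"; then
    return 1
  fi
  return 0
}

# Demo on a sample file; point the check at your propertyfile folder.
demo=$(mktemp -d)
printf 'BAN.APPLOGIN_USER="<Required>"\n' > "$demo/cp4ba_user_profile.property"
check_required "$demo" || echo "Fix the reported <Required> values first"
```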

    The following messages are displayed at the end of the execution:

    [INFO] The DB SQL statement files for CP4BA are in directory <extracted_cert-kubernetes_archive>/cert-kubernetes/scripts/cp4ba-prerequisites/project/<CP4BA_NAMESPACE>/dbscript, you can modify or use the default settings to create the database. (DO NOT CHANGE DBNAME/DBUSER/DBPASSWORD DIRECTLY)
    [NEXT ACTIONS]
    Enter the correct values in the YAML templates for the secrets under <extracted_cert-kubernetes_archive>/cert-kubernetes/scripts/cp4ba-prerequisites/project/<CP4BA_NAMESPACE>/secret_template
    ...

    The /cp4ba-prerequisites/project/<CP4BA_NAMESPACE> directory has the following structure and varies depending on the capabilities that you selected when you ran the script:

    ├── create_secret.sh
    ├── dbscript
    │   └── <component>
    │       └── <database_type>
    │           └── <db_server_alias>
    │               └── <sql_template>
    ├── propertyfile
    └── secret_template
        ├── <component>
        │   └── <yaml_secret>
        └── ibm-ldap-bind-secret.yaml
    Attention: The cp4a-prerequisites.sh script generates the secret templates (ibm-fncm-secret, ibm-ban-secret, and ibm-ldap-bind-secret) with the cp4ba.ibm.com/backup-type=mandatory label. The label helps to back up the secrets and include them as part of your disaster recovery strategy.

    If you create other secrets that contain TLS/SSL certificates for external services and add them to the trusted_certificate_list CR parameter or you replace the default secret name icp4a-root-ca in the root_ca_secret CR parameter, then you must add the cp4ba.ibm.com/backup-type=mandatory label to these secrets.

    Run the following commands to add the label to a secret and apply the change.

    oc get secret <secret-name> -o yaml | oc label -f - cp4ba.ibm.com/backup-type=mandatory --local -o yaml > my_secret.yaml
    oc apply -f my_secret.yaml

    If you chose an external PostgreSQL for Zen and IM in the property mode, then the generate mode creates the following files under the propertyfile folder.

    • zen_external_db/ibm-zen-metastore-edb-cm.yaml: Use the generated YAML file to create a configMap.
    • im_external_db/ibm-im-metastore-edb-cm.yaml: Use the generated YAML file to create a configMap.
    • zen_external_db/ibm-zen-metastore-edb-secret.sh: Use the generated script to create the secret.
    • im_external_db/ibm-im-metastore-edb-secret.sh: Use the generated script to create the secret.
  7. Check that you have all the necessary files for your CP4BA deployment. Copy the required database and LDAP certificates into the target directories as indicated by the NEXT ACTIONS messages from the script. Make sure that the scripts and the YAML files have the correct values.

    For more information, see Importing the certificate of an external service.

    Note: If you selected Operational Decision Manager, the chosen database tables are automatically created when the pods are started. Therefore, the createODMDB.sql script does not include these commands.
  8. If required: If you selected a database type other than EDB Postgres, you or a database administrator must run the DB scripts against your database servers and use the YAML files to create the necessary secrets in your OpenShift Container Platform cluster.
    Remember: The target databases must meet the requirements of the capabilities that you want to install. Review the preparing sections for each capability before you go ahead and create the databases. For more information, see Preparing your chosen capabilities.

    For example, consider your intended workload when you configure the database services. If your deployment includes FileNet Content Manager, then review the important configuration tasks for the external database services under Preparing the databases.

    • For an external PostgreSQL database, you can use the psql command line interface to run the SQL scripts and manage the database configuration. Create the databases by running the following command as your PostgreSQL instance user.
      psql -U "$DB_USER" -f "$filename"

      $DB_USER is the PostgreSQL user, and $filename is the path to your SQL file.

    • If you chose the Db2® type for your database, run the SQL files with the db2 command line interface to create your databases. $filename is the path to your SQL file.
      db2 -tvf "$filename"
    • Microsoft SQL Server (MSSQL) provides a more interactive UI. You can open the SQL files in SQL Server Management Studio on your MSSQL machine, and then run them by clicking Execute to create the databases.
    Tip: If you configured your deployment to use RDS Db2 (dc_database_type=db2rds or dc_database_type=db2rdsHADR), the generated SQL script must be run as a stored procedure with administrator user credentials to create the database. Leave enough time between the SQL statements for the previous commands to complete. Creating an RDS Db2 database often takes a few minutes.
    CALL rdsadmin.create_database('GCDDB',32768,'UTF-8','US');

    After the database is created, you can then run further stored procedures if you want. For example, stored procedures can be used to create the buffer pool, table spaces, and users.

    -- Create buffer pool
    CALL rdsadmin.create_bufferpool('GCDDB','GCDDB_1_32K',1024,'Y','Y',32768,0,32);
    CALL rdsadmin.create_bufferpool('GCDDB','GCDDB_2_32K',1024,'Y','Y',32768,0,32);
    -- Create table spaces
    CALL rdsadmin.create_tablespace('GCDDB','GCDDATA_TS','GCDDB_1_32K',32768,NULL,NULL,'U','AUTOMATIC');
    CALL rdsadmin.create_tablespace('GCDDB','GCDDB_TMP_TBS','GCDDB_2_32K',32768,NULL,NULL,'T','AUTOMATIC');
    -- Create user
    CALL rdsadmin.create_role('GCDDB','FNCM');
    CALL rdsadmin.add_user('gcduser','smartpasswordgoeshere',null);
    CALL rdsadmin.grant_role(?,'GCDDB','FNCM','USER gcduser','N');
    CALL rdsadmin.update_db_param('GCDDB','LOCKTIMEOUT','30'); 

    If you create users, run the GRANT permissions statements for all the users of the database. In the following example, the database administrator must connect to the database (GCDDB) to run the SQL statements.

    -- Grant permissions to DB user
    GRANT CREATETAB,CONNECT ON DATABASE TO USER gcduser;
    GRANT USE OF TABLESPACE GCDDATA_TS TO USER gcduser;
    GRANT USE OF TABLESPACE GCDDB_TMP_TBS TO USER gcduser;
    GRANT SELECT ON SYSIBM.SYSVERSIONS TO USER gcduser;
    GRANT SELECT ON SYSCAT.DATATYPES TO USER gcduser;
    GRANT SELECT ON SYSCAT.INDEXES TO USER gcduser;
    GRANT SELECT ON SYSIBM.SYSDUMMY1 TO USER gcduser;
    GRANT USAGE ON WORKLOAD SYSDEFAULTUSERWORKLOAD TO USER gcduser;
    GRANT IMPLICIT_SCHEMA ON DATABASE TO USER gcduser;
    -- GRANT USAGE ON SCHEMA RDSADMIN TO gcduser;
    GRANT SQLADM ON DATABASE TO USER gcduser;
    GRANT EXECUTE ON PACKAGE NULLID.SYSSN200 TO USER gcduser;
    GRANT EXECUTE ON PACKAGE NULLID.SYSSH200 TO USER gcduser;

    For more information about RDS Db2, see Amazon RDS for Db2 External link opens a new window or tab. For more information about stored procedures, see Amazon RDS for Db2 stored procedure reference External link opens a new window or tab.

    When you set up an RDS Db2 instance for your Cloud Pak for Business Automation deployments, use Amazon Route 53 to configure a custom DNS name. Route 53 provides a consistent endpoint for your CP4BA data source configurations and adds stability across database operations. For more information, see AWS Route 53 Developer Guide External link opens a new window or tab.
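Before you point your data sources at the custom DNS name, you can confirm that the name resolves from your client machine. The following is a minimal sketch; the record name that you check would be your own Route 53 name, and "localhost" is used here only so that the sketch runs anywhere.

```shell
# check_dns verifies that a host name resolves on this client.
check_dns() {
  if getent hosts "$1" >/dev/null 2>&1; then
    echo "$1 resolves"
  else
    echo "$1 does not resolve"
    return 1
  fi
}

# Substitute your own Route 53 record name for "localhost".
check_dns "localhost"
```

If the name does not resolve, fix the Route 53 record before you generate or validate the data source property files.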

    1. Run the following command to go to the target CP4BA namespace.
      oc project $NAMESPACE
    2. Make sure that you are in the scripts folder under cert-kubernetes and run the create_secret.sh script to apply all the YAML template files that you generated.
      Important: The generated secrets include data and stringData sections in the YAML files. The data sections require values to be Base64 encoded. The stringData sections require values in plain text. If you need to modify any password in the secret template files before you create the secret in your cluster, then make sure that the updated fields are in the right format.
      • If an updated value is in the data: section of a file, it must be Base64 encoded. You can generate a Base64 encoded value, without a trailing newline character, by running the following command.
        echo -n "<value-to-update>" | base64
      • If an updated value is in the stringData: section of a file, it can be in plain text.
      ./cp4ba-prerequisites/project/$NAMESPACE/create_secret.sh
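    As a quick sanity check before you create a secret, you can verify that an edited data: value decodes back to exactly the password that you intended. A minimal sketch with a placeholder value, using printf so that no trailing newline becomes part of the stored password:

```shell
# Encode a placeholder secret value for the data: section of a
# secret YAML. printf (or echo -n) avoids appending a newline,
# which would otherwise be stored as part of the password.
value="myPassword"                              # placeholder value
encoded=$(printf '%s' "$value" | base64)
echo "$encoded"                                 # prints bXlQYXNzd29yZA==

# Decode it again to confirm the round trip.
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"                                 # prints myPassword
```

    If the decoded value does not match your intended password, correct the field in the secret template before you run create_secret.sh.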
  9. Optional: Before you validate your database and LDAP connections, you can set values for your language and country. If no values are set, English (-Duser.language=en) is set as the language, and the United States (-Duser.country=US) is set as the country.

    Run the export command to set the values for a language and country as environment variables before you run the cp4a-prerequisites.sh script in the "validate" mode. The following variables set the language to Hindi and the country to India.

    export CP4BA_AUTO_LANGUAGE="HI"
    export CP4BA_AUTO_REGION="IN"
  10. Optional: When all the required databases and secrets are created, make sure that you are in the scripts folder under cert-kubernetes, and run the cp4a-prerequisites.sh script again in the "validate" mode.
    ./cp4a-prerequisites.sh -m validate -n $NAMESPACE

    The command validates that the storage classes that you entered in the property mode meet the RWX and RWO requirements. If the validation is successful, it is marked as PASSED!

    The command also checks that the required secrets are found, and submits a validation query to the LDAP server and to each remote database server in the list. If you chose an external PostgreSQL database for the Zen metastore, its connection is also checked. If the operations succeed within the timeout threshold, the validation is marked as PASSED! No queries are run and no data is changed; the script only reports whether the connection succeeded.

    If a connection is not successful, a message tells you which connection failed. To resolve the issue, check and correct the values in your property files, and then try again.

    Note: The cp4a-prerequisites.sh -m validate command uses a simple JDBC method to test the connection to the remote database servers with the Cloud Pak for Business Automation default JDBC drivers.

    If you need to use customized JDBC drivers in your production deployment, you can specify the location of these drivers with the sc_drivers_url parameter during the configuration of the custom resource. For more information, see Optional: Preparing customized versions of JDBC drivers and ICCSAP libraries.
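If the validate mode reports a failed database connection, you can first narrow down whether the problem is basic network reachability before you revisit the credentials in your property files. A minimal sketch using bash's /dev/tcp redirection, with hypothetical host and port values; this only confirms that the port accepts connections, it does not validate credentials the way the script does.

```shell
# check_port tests basic TCP reachability to a database server.
# It does not log in or run queries; it only confirms that the
# port accepts a connection within the timeout.
check_port() {
  if timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# Hypothetical values; substitute the host and port from your
# cp4ba_db_server.property file.
check_port "db1.example.com" "50000"
```

If the port is unreachable, check firewalls, security groups, and the host and port values in the property files before you rerun the validation.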

Results

Tip: You can change (add or remove) the selected CP4BA capabilities that you want to prepare for by rerunning the script and merging the new property files with the backed-up property files.

You can rerun the script in the "property" mode to create new property files. When the script detects that it ran before, it renames the previous property folder to a new time-stamped folder. The name of the backed-up folder is cert-kubernetes/scripts/cp4ba-prerequisites-backup/project/$NAMESPACE/propertyfile_%Y-%m-%d-%H:%M:%S.
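To locate a backed-up folder, it can help to see how the time-stamped suffix is built. A minimal sketch that reproduces the naming pattern, where the NAMESPACE value is a placeholder for your target project:

```shell
# Reproduce the propertyfile_%Y-%m-%d-%H:%M:%S naming pattern that
# the script uses when it backs up a previous property folder.
NAMESPACE="cp4ba-project"   # placeholder project name
ts=$(date +%Y-%m-%d-%H:%M:%S)
echo "cert-kubernetes/scripts/cp4ba-prerequisites-backup/project/$NAMESPACE/propertyfile_$ts"
```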

Use the following steps to update your property files to include your updated capabilities:

  1. Copy the .tmp/.TEMPORARY.property file to a backup file, for example .TEMPORARY.property.backup.
  2. Rerun the cp4a-prerequisites.sh script in the "property" mode, and choose a different selection of capabilities.
  3. Restore the cp4ba_LDAP.property and cp4ba_External_LDAP.property files from the backup folder by copying and pasting them into the new folder.
  4. Compare the cp4ba_db_server.property file from the backup folder and merge it where necessary with the new cp4ba_db_server.property file.
  5. Merge the new cp4ba_db_name_user.property and cp4ba_user_profile.property files with the backed-up property files.
  6. Rerun the cp4a-prerequisites.sh script in the "generate" mode to update the database SQL statements and YAML templates for the secrets.
  7. Compare and merge the .TEMPORARY.property.backup file with the .tmp/.TEMPORARY.property file for the new capabilities.
  8. Run the database SQL statements for the new capabilities.
  9. Create the secrets for the new capabilities.
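The restore-and-compare steps above can be sketched with standard tools such as cp and diff. A minimal illustration that uses temporary directories as stand-ins for the backed-up and newly generated property folders; all file contents and paths here are hypothetical, and your actual backup folder carries a timestamp suffix.

```shell
# Temporary stand-ins for the backed-up propertyfile folder and
# the newly generated one; all values below are hypothetical.
old=$(mktemp -d)
new=$(mktemp -d)
printf 'LDAP_SERVER="ldap.example.com"\n' > "$old/cp4ba_LDAP.property"
printf 'DATABASE_SERVERNAME="db1.example.com"\n' > "$old/cp4ba_db_server.property"
printf 'DATABASE_SERVERNAME="<yourdbserver>"\n' > "$new/cp4ba_db_server.property"

# Step 3: restore the LDAP property files from the backup as-is.
cp "$old/cp4ba_LDAP.property" "$new/"

# Steps 4-5: list the differences, then merge the values by hand.
diff "$old/cp4ba_db_server.property" "$new/cp4ba_db_server.property" || true
```

The diff output shows every value that you filled in previously so that you can carry it over into the new property file before you rerun the script in the "generate" mode.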

If you already installed a CP4BA deployment and want to update it with the new databases and secrets for the new capabilities, you must run the cp4a-deployment.sh script again to update the custom resource. Do not forget to verify the custom resource YAML before you scale down the deployment, apply the new custom resource with the --overwrite=true parameter, and scale the deployment back up. For more information, see Applying the upgraded custom resource.

What to do next

The next task to complete depends on the capabilities that you selected for your deployment. Prepare all of these capabilities and any dependencies. Go to the next task, Optional: Preparing to monitor your containers, or jump to the relevant capability from the table of contents or from Preparing your chosen capabilities.

Remember: When you run the cp4a-deployment.sh script from the location that outputs the cp4ba-prerequisites folder, the values that are defined in your property files are used in the custom resource file.