Installing an Enterprise deployment in Operator Hub
Operator Lifecycle Manager (OLM) helps you install, update, and manage the lifecycle of all operators and services that are deployed in OpenShift Container Platform (OCP) clusters.
Before you begin
- If you created an air-gapped environment, you must complete the steps in Preparing the operator and log file storage before you install the operator. Otherwise, complete the steps in Preparing for an Enterprise deployment.
- You must then follow the relevant steps to prepare the patterns that you want to install. For more information, see Preparing capabilities.
- Log in to your OCP or ROKS cluster.
- In the Installed Operators view, verify that the status of the IBM Cloud Pak for Business Automation operator installation reads Succeeded, and verify the deployment by checking that all of the pods are running.
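The same checks can be run from the command line. This is a sketch only: the cluster commands are shown as comments with a placeholder namespace, and the pod-status filter is demonstrated against sample `oc get pods` output so it can run anywhere.

```shell
# Cluster commands (replace <namespace> with your project name):
#   oc get csv -n <namespace>    # the operator PHASE column should read Succeeded
#   oc get pods -n <namespace>   # all pods should be Running or Completed
# The pod-status filter, demonstrated against sample output; it prints the
# number of pods that are neither Running nor Completed (0 means healthy):
printf 'NAME   READY STATUS\npod-a  1/1   Running\npod-b  1/1   Running\n' \
  | awk 'NR>1 && $3 != "Running" && $3 != "Completed" {n++} END {print n+0}'
# → 0
```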
About this task
Operator Lifecycle Manager is part of the Operator Framework, which is an open source toolkit that is designed to manage Kubernetes applications in an effective, automated, and scalable way.
IBM provides operators to OCP in the form of a catalog. The catalog is added to an OCP cluster and appears in the OCP Operator Hub under the IBM Operator Catalog provider type.
Procedure
Results
Check that the icp4ba cartridge in IBM Automation Foundation Core is ready. For more information about IBM Automation Foundation, see What is IBM Automation foundation? To view the status of the icp4ba cartridge in the OCP Admin console, click . Click the Cartridge tab, click icp4ba, and then scroll to the Conditions section.
When the deployment is successful, a ConfigMap is created in the CP4BA namespace (project) to provide the cluster-specific details to access the services and applications. The ConfigMap name is prefixed with the deployment name (default is icp4adeploy). You can search for the routes with a filter on "cp4ba-access-info".
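For reference, the route search is an ordinary grep filter. The ConfigMap name below is illustrative, built from the default deployment name icp4adeploy; on a live cluster you would pipe `oc get configmap` output into the same filter instead of the sample listing.

```shell
# On a cluster: oc get configmap -n <namespace> | grep cp4ba-access-info
# The filter itself, shown against a sample listing:
printf '%s\n' \
  'icp4adeploy-cp4ba-access-info   12   4d' \
  'ibm-cpp-config                   3   4d' \
  | grep 'cp4ba-access-info'
```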
The contents of the ConfigMap depend on the components that are included. Each component has one or more URLs and, if needed, a username and password.
<component1> URL: <RouteUrlToAccessComponent1>
<component2> URL: <RouteUrlToAccessComponent2>
For the Operational Decision Manager (ODM) components, the default username and password are odmAdmin/odmAdmin. The ODM routes use the following names:
- Decision Server Console: <meta_name>-odm-ds-console-route
- Decision Runner: <meta_name>-odm-dr-route
- Decision Center: <meta_name>-odm-dc-route
- Decision Server Runtime: <meta_name>-odm-ds-runtime-route
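A quick way to list just the ODM routes is to filter on the odm substring. The route names below are illustrative, following the <meta_name>-odm-* pattern with the default deployment name; on a cluster you would filter real `oc get route` output.

```shell
# On a cluster: oc get route -n <namespace> | grep odm
# The filter, shown against sample route names:
printf '%s\n' \
  icp4adeploy-odm-ds-console-route \
  icp4adeploy-odm-dc-route \
  icp4adeploy-navigator-route \
  | grep 'odm'
```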
What to do next
When all of the containers are running, you can access the services.
Business Automation Studio leverages the IBM Cloud Pak Platform UI (Zen UI) to provide a role-based user interface for all Cloud Pak capabilities. Capabilities are dynamically available in the UI based on the role of the user who logs in. You can find the URL for the Zen UI by clicking the cpd route, or by running the following command and looking for the name:
oc get route | grep "^cpd"
Log in to the Admin Hub to configure your LDAP with the Identity and Access Management (IAM) service. You can log in with two authentication types: OpenShift authentication and IBM provided credentials (admin only). Use your kubeadmin username and credentials to log in with OpenShift authentication. On ROKS, you must use IBM provided credentials. The default username for these credentials is "admin". You can get the default username by running the following command:
oc -n ibm-common-services get secret platform-auth-idp-credentials \
-o jsonpath='{.data.admin_username}' | base64 -d && echo
You get the password by running the following command:
oc -n ibm-common-services get secret platform-auth-idp-credentials \
-o jsonpath='{.data.admin_password}' | base64 -d
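The commands above work because Kubernetes secret values are stored base64-encoded, and the `base64 -d` step decodes them. A minimal local sketch of that decode step, using a made-up value rather than a real credential:

```shell
# Encode a sample value the way a secret stores it, then decode it back:
encoded=$(printf 'examplePassw0rd' | base64)
printf '%s' "$encoded" | base64 -d
# → examplePassw0rd
```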
You can change the default password at any time. For more information, see Changing the cluster administrator password.
You must then add users to the Automation Developer role to enable users and user groups to access Business Automation Studio and work with business applications and business automations. For more information, see Completing post-deployment tasks for Business Automation Studio.
To enable logs and monitoring, add the required YAML to the custom resource (CR) in the YAML view. For example, the following parameters provide custom settings for the content pattern.
monitoring_configuration:
  collectd_disable_host_monitoring: false
  collectd_interval: 10
  collectd_plugin_write_graphite_host: localhost
  collectd_plugin_write_graphite_port: 2003
  collectd_plugin_write_prometheus_port: 9103
  mon_enable_plugin_mbean: true
  mon_enable_plugin_pch: true
  mon_metrics_writer_option: 4
logging_configuration:
  mon_log_parse: true
  mon_log_shipper_option: "1"
  mon_log_service_endpoint: example.com:9200
  private_logging_enabled: false
  logging_type: default
  mon_log_path: /path_to_extra_log
ecm_configuration:
  cpe:
    logging_enabled: true
    monitor_enabled: true
  css:
    logging_enabled: true
    monitor_enabled: true
  graphql:
    logging_enabled: true
    monitor_enabled: true
  cmis:
    logging_enabled: true
    monitor_enabled: true
  es:
    logging_enabled: true
    monitor_enabled: true
Some capabilities require post-deployment steps. For more information, see Completing post-deployment tasks.