To install the Cloud Pak capabilities with the Cloud Pak operator, a cluster
administrator runs a script to set up the cluster. The administrator must also pass
information from the script output to a non-administrator user, who then runs the deployment
script.
Before you begin
Important: Before you use the All namespaces on the cluster option,
check the openshift-operators namespace to find installed operators. The
openshift-operators namespace is watched by the Operator Lifecycle Manager (OLM).
If Automation foundation operators are already installed by another Cloud Pak, then you must install
Cloud Pak for Business Automation in
All namespaces on the cluster.
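A quick way to check which operators OLM already manages in the openshift-operators namespace is from the CLI (a sketch, assuming you are already logged in to the cluster):
# List the operator subscriptions and installed versions in openshift-operators
oc get subscriptions.operators.coreos.com -n openshift-operators
oc get csv -n openshift-operators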
About this task
The cluster setup script is one of three core scripts (cluster setup, deployment, and
post-deployment) that are provided to help you install the Cloud Pak capabilities. You must be a
cluster administrator to run the setup script. For more information, see Targeted role-based user
archetypes.
Note: The cluster setup script installs a set of Cloud Platform Foundation Services for the Cloud
Pak.
If the IBM operator catalog already appears in your OperatorHub because
you installed foundational services or a starter deployment, then you do not need to complete all of
the following steps. To check whether the catalog is installed, see What to do next.
The cluster setup script identifies or creates a namespace and applies the custom resource
definitions (CRD). It then adds the specified user to the ibm-cp4a-operator role,
binds the role to the service account, and applies a security context constraint (SCC) for the Cloud
Pak.
The script also prompts the administrator to take note of the cluster hostname and a dynamic
storage class on the cluster. These names must be provided to the user who runs the deployment
script.
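If you want to look up these values yourself, the following standard OpenShift queries return the same kind of information that the script reports (a sketch):
# List the dynamic storage classes that are available on the cluster
oc get storageclass
# Show the default application routing domain, from which the cluster hostname is typically derived
oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}'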
Note: The scripts can be used only on a Linux-based machine or virtual machine (RHEL, CentOS,
and so on), or on a client to such a machine, that can run Podman. The setup script does not set any
parameters in the custom resource (CR). The cluster administrator might run the script on a
different host than the user who later runs the deployment script.
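To confirm that the machine where you run the scripts can use Podman, a quick check (a sketch):
# Verify that Podman is installed and that it can reach its runtime
podman --version
podman info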
Use the following steps to complete the setup.
Procedure
-
Download the appropriate repository to a Linux®-based
machine (RHEL, CentOS, and so on) or to a client of a Linux-based machine or VM that runs Podman natively, and go to the
cert-kubernetes directory.
-
Log in to the target cluster as the
<cluster-admin> user.
Using the OpenShift CLI:
oc login https://<cluster-ip>:<port> -u <cluster-admin> -p <password>
On ROKS, if you are not already logged in:
oc login --token=<token> --server=https://<cluster-ip>:<port>
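To confirm that you are logged in with cluster administrator authority before you run the setup script, a quick check (a sketch):
# Show the current user and test for cluster-wide permissions
oc whoami
oc auth can-i '*' '*' --all-namespaces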
-
Run the cluster setup script from where you downloaded the cert-kubernetes
repository, and follow the prompts in the command window.
cd scripts
./cp4a-clusteradmin-setup.sh
- From 21.0.3-IF033, select the CP4BA deployment environment: Online (1) or
Offline/Airgap (2). Select Online.
- Select the platform type: ROKS (1) or OCP (2).
- Select the deployment type production.
- Select
Yes if you want to install the CP4BA operator in 'All Namespaces'. The
default is No.
- Enter the name for a new project or an existing project (cp4ba-project) for
the target deployment namespace. For more information, see Preparing storage for the Cloud Pak operator.
If an existing CP4BA operator is found in another project on your cluster, confirm that you want
to deploy another CP4BA operator in the new project by entering Yes. You must
install a CP4BA operator in each namespace where you want to install a CP4BA instance.
- Enter Yes or No to confirm whether you want to use
the images in the IBM Entitled Registry.
- If you replied Yes, enter your IBM Entitled Registry key and login
credentials (user and password).
If you want to load the container images to a local registry,
then set up the cluster by mirroring the images instead of running the
cp4a-clusteradmin-setup.sh script. For more information, see Setting up the cluster by mirroring the container images.
Tip: If you ran the
cp4a-clusteradmin-setup.sh script and you see one or more of the following
messages, then make sure that you start Docker or Podman and run the script
again.
Error saving credentials: error storing credentials
Error: unable to connect
The Entitlement Registry key failed
- Enter a dynamic storage class name. On ROKS/ROKS VPC, use an appropriate storage
class. To see which storage classes exist on your cluster, see the sketch after this step.
Note: From 21.0.3-IF008, the script no longer creates the three storage classes:
ibmc-file-bronze-gid, ibmc-file-silver-gid, and
ibmc-file-gold-gid.
Note: The following message appears on OCP 4.6, but the warning does not have any functional
impact.
Creating the custom resource definition (CRD) and a service account that has the permissions to manage the resources...
W1102 26405 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
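Before you answer the storage class prompt, you can list the storage classes that exist on the cluster. If you plan to use the IBM Entitled Registry, you can also test your entitlement key with a registry login before you enter it in the script (a sketch; cp.icr.io and the cp user name are the conventional values for the IBM Entitled Registry, so confirm them against your entitlement details):
# List the available storage classes and their provisioners
oc get storageclass
# Optionally, verify the entitlement key before you give it to the script
podman login cp.icr.io -u cp -p <entitlement-key>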
-
Monitor the operator pods until they show a STATUS of "Running".
oc get pod -w
Tip: If
ibm-cp4a-operator is inactive for some time, you can delete
the operator pod and let it reconcile.
To confirm that the operator is stuck, check whether the log is still producing output.
oc project <namespace of Cloud Pak for Business Automation operator>
NAMESPACE=$(oc project -q)
podname=$(oc get pod -n $NAMESPACE | grep ibm-cp4a-operator | awk '{print $1}')
oc logs $podname -f
You can also list the ClusterServiceVersion (CSV) to verify the version of the
running operators on your cluster.
oc get csv
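To check that each ClusterServiceVersion reached the Succeeded phase, you can also show the phase explicitly (a sketch):
oc get csv -o custom-columns='NAME:.metadata.name,VERSION:.spec.version,PHASE:.status.phase'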
-
Add JDBC drivers to the operator pod for Business Automation Navigator and all of the other
patterns in your deployment that need them.
Copy all of the JDBC drivers that are needed by the components to the operator pod. Depending on
your storage configuration, you might not need all these drivers. For more information about
compatible JDBC drivers, see Db2 JDBC information, Oracle JDBC information, SQL Server JDBC information, and PostgreSQL JDBC information. The following .jar files are
examples.
The following structure shows an example remote file system. The jdbc
directory and the subfolder name for the database must be created.
/root/operator
└── jdbc
    ├── db2
    │   ├── db2jcc4.jar
    │   └── db2jcc_license_cu.jar
    ├── oracle
    │   ├── ojdbc8.jar
    │   └── orai18n.jar
    ├── sqlserver
    │   └── mssql-jdbc-8.2.2.jre8.jar
    └── postgresql
        └── postgresql-42.2.9.jar
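For example, you could create that layout with commands like the following (a sketch; adjust the base path and keep only the database folders that your components need):
# Create the jdbc directory and one subfolder per database type
mkdir -p /root/operator/jdbc/{db2,oracle,sqlserver,postgresql}
# Copy the downloaded driver .jar files from the current directory into the matching subfolder
cp db2jcc4.jar db2jcc_license_cu.jar /root/operator/jdbc/db2/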
Copy the JDBC files to the operator
pod.
oc project <namespace of Cloud Pak for Business Automation operator>
NAMESPACE=$(oc project -q)
podname=$(oc get pod -n $NAMESPACE | grep ibm-cp4a-operator | awk '{print $1}')
oc cp PATH_TO_JDBC/jdbc $NAMESPACE/$podname:/opt/ansible/share
The
PATH_TO_JDBC is the path to the driver files on your system.
To verify that the files are in the pod, run the following command:
oc exec -n $NAMESPACE $podname -- ls -lR /opt/ansible/share
- Optional:
If you intend to install Content Collector for SAP as an optional component of the Content
Manager pattern, then you must download the necessary libraries, put them in a directory, and copy
the files to the operator pod.
-
Make a saplibs directory.
Give read and write permissions to the directory by running the chmod
command (see the sketch after these steps).
-
Download the SAP NetWeaver SDK 7.50 library from the SAP Service Marketplace.
-
Download the SAP JCo Release 3.0.x from the SAP Service Marketplace.
-
Extract all of the content of the packages to the saplibs directory.
-
Check that you have all of the following libraries.
saplibs/
├── libicudata.so.50
├── libicudecnumber.so
├── libicui18n.so.50
├── libicuuc.so.50
├── libsapcrypto.so
├── libsapjco3.so
├── libsapnwrfc.so
└── sapjco3.jar
-
Copy the SAP files to the operator pod.
oc cp PATH_TO_SAPLIBS/saplibs $NAMESPACE/$podname:/opt/ansible/share
The
PATH_TO_SAPLIBS is the path to the SAP library files on your system.
To verify that the files are in the pod, run the following command:
oc exec -n $NAMESPACE $podname -- ls -lR /opt/ansible/share
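A consolidated sketch of the preparation commands for the SAP libraries (the base path and archive names are placeholders; use the packages that you downloaded from the SAP Service Marketplace):
# Create the saplibs directory and give it read and write permissions
mkdir -p /root/operator/saplibs
chmod -R a+rw /root/operator/saplibs
# Extract the downloaded packages; make sure that the .so and .jar files end up directly in saplibs/ as shown in the list above
unzip <sap-nwrfc-sdk-7.50>.zip -d /root/operator/saplibs
unzip <sap-jco-3.0.x>.zip -d /root/operator/saplibs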
Results
When the script is finished, all of the available storage class names are displayed along with
the infrastructure node name. Take note of the following information and provide it to the Cloud
Pak admin user, as it is needed for the deployment script:
- Project name or namespace.
- Storage class names.
- Username to log in to the cluster.
What to do next
You can see the list of operators that are installed in your cluster on the Operators > Installed Operators page of the OpenShift console. For more information about foundational services, see IBM Cloud Pak foundational services operators and
versions.
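To check from the CLI whether the IBM operator catalog source is installed, list the catalog sources (a sketch; ibm-operator-catalog is the conventional name of the IBM catalog source):
oc get catalogsource -n openshift-marketplace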
To verify the Common Services installation, check whether all the pods in the
ibm-common-services namespace are running. Use the following command:
oc get pods -n ibm-common-services
You can also use the following command to verify whether the services are successfully
installed:
oc -n ibm-common-services get csv
Change the admin user for IAM
The installation of IBM Cloud Pak foundational services creates an admin user,
who is a cluster administrator. To prevent the admin user from being removed when you
uninstall foundational services, you can customize the username by adding the
defaultAdminUser parameter to the OperandConfig instance in
the ibm-common-services namespace. Set a custom name that is not
admin.
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        defaultAdminUser: <custom-username>
You can access the common-service instance by using the OpenShift Container
Platform console or by using the command-line interface (CLI).
-
In the console, use the following steps:
- From the navigation menu, go to the OperandConfig instances.
- Click the overflow menu icon of the
common-service instance, and click
Edit OperandConfig.
-
To use the CLI, run the following command:
oc edit OperandConfig common-service -n ibm-common-services
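After you save the change, you can confirm that the new value is present in the OperandConfig (a sketch, assuming the default instance name common-service):
oc get operandconfig common-service -n ibm-common-services -o yaml | grep -A 4 'name: ibm-iam-operator'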
Continue to prepare everything that you need for each capability that you want to install in
Preparing capabilities.