To install the Cloud Pak capabilities with the Cloud Pak operator, a cluster
administrator user can run a script to set up the cluster. The administrator must also provide
information that they get from the script to a non-administrator user so they can run the deployment
script.
Before you begin
Important: The Cloud Pak cannot be installed on a cluster with an existing installation
of IBM Automation foundation that used the All namespaces on the cluster
option. Check the openshift-operators namespace to find installed operators. The
Cloud Pak supports installation in a single namespace, not in all namespaces. To install more
than one deployment of the Cloud Pak, each deployment must be installed in a different namespace,
and the operator must be installed in each namespace.
About this task
The cluster setup script is one of three core scripts (cluster setup, deployment, and
post-deployment) that are provided to help you install the Cloud Pak capabilities. You must be a
cluster administrator to run the setup script. For more information, see Targeted role-based user
archetypes.
Note: The cluster setup script installs a set of Cloud Platform Foundation Services for the Cloud
Pak.
The cluster setup script identifies or creates a namespace and applies the custom resource
definitions (CRD). It then adds the specified user to the ibm-cp4a-operator
role,
binds the role to the service account, and applies a security context constraint (SCC) for the Cloud
Pak.
The script also prompts the administrator to take note of the cluster hostname and a dynamic
storage class on the cluster. These names must be provided to the user who runs the deployment
script.
Note: The scripts can be used only on Red Hat Enterprise Linux (RHEL), CentOS, or a client to a Linux-based machine
or virtual machine that can run Podman. The setup script does not set any parameters in the custom
resource (CR). The cluster administrator might be running the script on a different host than the
user who later runs the deployment script.
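Because the note above assumes a host that can run Podman and the OpenShift CLI, a quick preflight check can save time. The following sketch defines a hypothetical check_cli helper (the name is ours, not part of the Cloud Pak scripts) that reports whether each CLI used in this procedure is available on the current machine:

```shell
#!/bin/sh
# Hypothetical helper: report whether a CLI is on the PATH.
check_cli() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

# The CLIs used in this procedure.
for cli in oc kubectl podman; do
  check_cli "$cli"
done
```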
Use the following steps to complete the setup.
Procedure
-
Download the appropriate repository to a Linux®-based machine (RHEL, CentOS, and so on)
or a client to a Linux-based machine or VM that runs Podman natively, and go to the
cert-kubernetes directory.
-
Log in to the target cluster as the <cluster-admin> user.
Using the OpenShift CLI:
oc login https://<cluster-ip>:<port> -u <cluster-admin> -p <password>
On ROKS, if you are not already logged in:
oc login --token=<token> --server=https://<cluster-ip>:<port>
-
Run the cluster setup script from where you downloaded the cert-kubernetes
repository, and follow the prompts in the command window.
cd scripts
./cp4a-clusteradmin-setup.sh
- Select the platform type: ROKS (1) or OCP (2).
Select Other (3) only if you want to install a standalone deployment of
FileNet Content Manager (FNCM) or Business Automation Workflow (BAW).
- Select the deployment type enterprise.
- Enter the name for a new project, or the name of an existing project (cp4ba-project) if you
created one when you prepared the storage. For more information, see Preparing storage for the Cloud Pak operator.
- Select a username from the list of eligible users by entering the number associated with that
user.
- Enter your IBM Entitled Registry key and login credentials (user and password).
If you selected platform type Other, enter your IBM Entitled Registry key or the URL
and your username and password for the image registry where you loaded the container
images.
- Enter a dynamic storage class name.
Note: The following message appears on OCP 4.6, but the warning does not have any functional
impact.
Creating the custom resource definition (CRD) and a service account that has the permissions to manage the resources...
W1102 26405 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
-
Monitor the operator pod until it shows a STATUS of "Running".
oc get pods -w
The following messages show an example of the waiting loop:
Waiting for the Cloud Pak operator to be ready. This might take a few minutes...
Waiting for deployment "ibm-cp4a-operator" rollout to finish: 0 of 1 updated replicas are available...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "ibm-cp4a-operator" rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for deployment "ibm-cp4a-operator" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "ibm-cp4a-operator" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "ibm-cp4a-operator" rollout to finish: 1 old replicas are pending termination...
deployment "ibm-cp4a-operator" successfully rolled out
deployment "ibm-cp4a-operator" successfully rolled out
Done
Tip: If ibm-cp4a-operator is inactive for some time, you can delete
the operator pod and let it reconcile. To confirm that the operator is stuck, check
whether the log is producing output.
podname=$(oc get pod | grep ibm-cp4a-operator | awk '{print $1}')
oc logs $podname -f
-
Add JDBC drivers to the operator pod for Business Automation Navigator and all of the other
patterns in your deployment that need them.
Copy all of the JDBC drivers that are needed by the components to the operator pod. Depending on
your storage configuration, you might not need all these drivers. For more information about
compatible JDBC drivers, see Db2 JDBC information, Oracle JDBC information, SQL Server JDBC information, and PostgreSQL JDBC information. The following .jar files are
examples.
- Db2®
- db2jcc4.jar
- db2jcc_license_cu.jar
- Oracle
- ojdbc8.jar
- orai18n.jar
- Microsoft SQL Server
- mssql-jdbc-8.2.2.jre8.jar
- PostgreSQL
- postgresql-42.2.9.jar
The following structure shows an example remote file system. The jdbc
directory and the subfolder for each database must be created.
/root/operator
└── jdbc
    ├── db2
    │   ├── db2jcc4.jar
    │   └── db2jcc_license_cu.jar
    ├── oracle
    │   ├── ojdbc8.jar
    │   └── orai18n.jar
    ├── sqlserver
    │   └── mssql-jdbc-8.2.2.jre8.jar
    └── postgresql
        └── postgresql-42.2.9.jar
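To stage the drivers locally before copying them, the directory layout above can be created with a short sketch. The BASE default here is a local ./operator directory for illustration only; adjust it to your actual path (for example /root/operator), and create only the subfolders for the databases your deployment uses:

```shell
# Sketch: pre-create the jdbc directory layout on the local file system.
# BASE defaults to ./operator for illustration; point it at your real path.
BASE=${BASE:-./operator}
mkdir -p "$BASE/jdbc/db2" "$BASE/jdbc/oracle" \
         "$BASE/jdbc/sqlserver" "$BASE/jdbc/postgresql"

# Move the downloaded driver .jar files into place, for example:
# mv ~/Downloads/db2jcc4.jar ~/Downloads/db2jcc_license_cu.jar "$BASE/jdbc/db2/"

# Show the resulting tree.
ls -R "$BASE/jdbc"
```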
Copy the JDBC files to the operator pod.
podname=$(oc get pod | grep ibm-cp4a-operator | awk '{print $1}')
kubectl cp $PATH_TO_JDBC/jdbc ${NAMESPACE}/$podname:/opt/ansible/share
Note: The $PATH_TO_JDBC is the path to the driver files on your system. The
${NAMESPACE} must be set to the namespace of the installed operator.
To verify that the files are in the pod, run the following commands:
oc get pod | grep ibm-cp4a-operator | awk '{print $1}'
The output provides the name of the pod: ibm-cp4a-operator-<ten characters>-<five characters>
oc rsh ibm-cp4a-operator-<ten characters>-<five characters>
ls -lR /opt/ansible/share
/opt/ansible/share:
total 0
drwxrwxrwx. 3 1000600000 root 1 jdbc
/opt/ansible/share/jdbc:
total 0
drwxrwxrwx. 2 1000600000 root 2 db2
/opt/ansible/share/jdbc/db2:
total 6399
-rw-r--r--. 1 1000600000 root 6550443 db2jcc4.jar
-rw-r--r--. 1 1000600000 root 1529 db2jcc_license_cu.jar
exit
- Optional:
If you intend to install Content Collector for SAP as an optional component of the Content
Manager pattern, then you must download the necessary libraries, put them in a directory, and copy
the files to the operator pod.
-
Make a saplibs directory.
Give read and write permissions to the directory by running the chmod
command.
-
Download the SAP NetWeaver SDK 7.50 library from the SAP Service Marketplace.
-
Download the SAP JCo Release 3.0.x from the SAP Service Marketplace.
-
Extract all of the content of the packages to the saplibs directory.
-
Check that you have all of the following libraries.
saplibs/
├── libicudata.so.50
├── libicudecnumber.so
├── libicui18n.so.50
├── libicuuc.so.50
├── libsapcrypto.so
├── libsapjco3.so
├── libsapnwrfc.so
└── sapjco3.jar
-
Copy the SAP files to the operator pod.
podname=$(oc get pod | grep ibm-cp4a-operator | awk '{print $1}')
kubectl cp $PATH_TO_SAPLIBS/saplibs ${NAMESPACE}/$podname:/opt/ansible/share
Note: The $PATH_TO_SAPLIBS is the path to the driver files on your system. The
${NAMESPACE} must be set to the namespace of the installed operator.
To verify that the files are in the pod, run the following commands:
oc rsh $(oc get pod | grep ibm-cp4a-operator | awk '{print $1}')
ls -ltr /opt/ansible/share/saplibs
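Before copying the libraries, you can confirm that nothing is missing. The following sketch defines a hypothetical check_saplibs helper (not part of the Cloud Pak scripts) that compares a directory against the list of required files above:

```shell
# Sketch: verify that saplibs contains every library that Content Collector
# for SAP needs, and report any file that is missing.
check_saplibs() {
  dir="$1"
  missing=0
  for f in libicudata.so.50 libicudecnumber.so libicui18n.so.50 \
           libicuuc.so.50 libsapcrypto.so libsapjco3.so \
           libsapnwrfc.so sapjco3.jar; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "saplibs complete"
  fi
  return "$missing"
}

check_saplibs ./saplibs || echo "add the missing files before copying"
```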
-
For 21.0.1: If you intend to include
Business Automation Workflow or Business Automation Insights as an optional component in your
deployment, create a secret with the name ibm-entitlement-key by using your
<user_password> for the IBM Entitled Registry.
From the OCP CLI, run the following command:
kubectl create secret docker-registry ibm-entitlement-key -n <project_name> \
--docker-username=cp \
--docker-password="<user_password>" \
--docker-server=cp.icr.io
The <project_name> must be set to the project that you created and prepared
for the operator. For more information, see Preparing the operator and log file storage.
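For reference, the secret that this command creates stores a .dockerconfigjson entry for cp.icr.io of the following shape. You do not need to build it by hand (kubectl does that for you); this sketch, with hypothetical REG_* variable names, is only useful when debugging image pull failures:

```shell
# Sketch: the .dockerconfigjson payload that a docker-registry secret holds.
REG_USER=cp
REG_PASS='<user_password>'   # replace with your IBM Entitled Registry key
# The auth field is the base64 encoding of "username:password".
REG_AUTH=$(printf '%s:%s' "$REG_USER" "$REG_PASS" | base64)
cat <<EOF
{"auths":{"cp.icr.io":{"username":"$REG_USER","password":"$REG_PASS","auth":"$REG_AUTH"}}}
EOF
```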
Results
When the script is finished, all of the available storage class names are displayed along with
the infrastructure node name. Take note of the following information and provide it to the Cloud
Pak admin user, as it is needed for the deployment script:
- Project name or namespace.
- For 21.0.1: Route hostname.
- Storage class names.
- Username to log in to the cluster.
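One way to hand these values over is a small environment file that the deployment user can source. The file name and the CP4BA_* variable names below are our own convention for illustration, not something the Cloud Pak scripts require:

```shell
# Sketch: record the handoff values for the user who runs the deployment
# script. Replace the <...> placeholders with the values the setup script
# displayed.
cat > cp4ba-cluster-info.env <<'EOF'
CP4BA_PROJECT=cp4ba-project
CP4BA_ROUTE_HOSTNAME=<route-hostname>
CP4BA_STORAGE_CLASS=<storage-class-name>
CP4BA_CLUSTER_USER=<username>
EOF

cat cp4ba-cluster-info.env
```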
What to do next
You can see the list of services that are installed in your cluster. For more information about foundational services, see IBM Cloud Pak foundational services operators and
versions.
To verify the Common Services installation, check whether all the pods in the
ibm-common-services namespace are running. Use the following command:
oc get pods -n ibm-common-services
You can also use the following command to verify whether the services are successfully
installed:
oc -n ibm-common-services get csv
Continue to prepare everything that you need for the Cloud Pak operator and each capability that
you want to install.
You can then generate the custom resource (CR) file by using another script. For more information, see Generating the custom resource.