All instances of an operator need a place to store their log files and find database
drivers. If you plan to run the deployment script to generate a custom resource (CR), the script
creates a persistent volume claim (PVC) and copies the JDBC drivers for you. However, if you
compile the CR manually, you must review all of the steps.
About this task
You must prepare the storage for the operator before you create an instance of it. You
can use the deployment script to create the operator instance, or you can create it manually. If you
choose to compile your CR file manually from a descriptor template, then you also need to install the
operator and create the necessary storage for it.
Tip: The cluster setup script identifies the available storage classes on your cluster,
but you can also create a new PV for the operator. The name of the PV must be set in the PVC, so make
sure that storageClassName has the correct value. If you use static
storage, make sure that you grant group write permission to the nfs.path on the
host or to your shared volume on your NFS server.
Important: If you plan to run the installation scripts and want to use the default storage, decide whether to create a new namespace before you run the scripts. You can create the namespace beforehand or when you run the cluster setup script. If you do not want to use the IBM Entitled Registry to pull the container images, then you need a namespace to load the images into a target registry. For more information, see
Getting access to images from an offline (private) image registry.
If you do not intend to run the scripts, complete all of the steps that apply to your configuration.
Procedure
- Log in to your OpenShift Container Platform (OCP) cluster.
oc login https://<cluster-ip>:<port> -u <cluster-admin> -p <password>
- Optional: Create a project (cp4a-project) for the operator by running the following command.
oc new-project <project_name> --description="<description>" --display-name="<display_name>"
- Optional: Create the YAML resources for the operator and component logs.
- Choice 1: If you want to use static storage, create a PV YAML file, for example
operator-shared-pv.yaml. The following example YAML defines two PVs: one for
the operator and one shared volume for the Ansible logs for the deployment. PVs depend on your
cluster configuration, so adapt the YAML to your configuration.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: operator-shared-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  nfs:
    path: /root/operator
    server: <NFS Server>
  persistentVolumeReclaimPolicy: Retain
New in 20.0.3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: cp4a-shared-log-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /root/logs
    server: <NFS Server>
  persistentVolumeReclaimPolicy: Delete
Replace <NFS Server> with the actual server name.
- If you did the previous step, deploy the PVs.
oc create -f operator-shared-pv.yaml
- If you did the previous steps, provide group write permission to the persistent
volumes. According to the PV nfs.path definitions, run the following commands:
chown -R :65534 <path>
chmod -R g+rw <path>
Where <path> is the value in your PVs (/root/operator and /root/logs).
Group ownership must be set to the anongid option given in the NFS export
definition of the NFS server associated with the PV. The default
anongid value is 65534.
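After you run the chown and chmod commands, you can confirm the result before you continue. The following helper is a minimal sketch (it is not part of the product scripts) that checks whether a path is group-writable; it assumes GNU stat on Linux.

```shell
# Hypothetical helper: report whether a PV export path has group write permission.
check_group_write() {
  local path="$1"
  local mode
  mode=$(stat -c '%A' "$path")   # symbolic mode string, e.g. drwxrwxr-x
  if [ "${mode:5:1}" = "w" ]; then
    echo "$path: group write OK"
  else
    echo "$path: group write MISSING"
    return 1
  fi
}
```

Run it against each PV path, for example `check_group_write /root/operator`.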
Remove the .OPERATOR_TYPE file if it exists from a previous deployment.
rm -f <path>/.OPERATOR_TYPE
Where <path> is the value in your operator PV (/root/operator).
- Create a claim for the static PVs. To create a claim bound to the previously created PVs,
create a file <path>/operator-shared-pvc.yaml anywhere on your disk, with the
following content.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: operator-shared-pvc
  namespace: <project_name>
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: operator-shared-pv
New in 20.0.3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cp4a-shared-log-pvc
  namespace: <project_name>
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  volumeName: cp4a-shared-log-pv
Replace the <project_name> placeholders with the name of the OpenShift project to use for the operator in your OCP cluster.
- Deploy the PVCs. If you created your own operator-shared-pvc.yaml file, run the
following command with your own path.
oc create -f <path>/operator-shared-pvc.yaml
- Choice 2: If you prefer to use dynamic provisioning for your claim, complete the following steps.
- Edit https://github.com/icp4a/cert-kubernetes/blob/20.0.3/descriptors/operator-shared-pvc.yaml
and replace the <StorageClassName> and <Fast_StorageClassName> placeholders with
storage classes of your choice.
- Deploy the PVCs. If you created your own operator-shared-pvc.yaml file, run the
following command with your own path.
oc create -f <path>/operator-shared-pvc.yaml
Otherwise, if you edited descriptors/operator-shared-pvc.yaml, run the command with
the file from the descriptors folder.
oc create -f descriptors/operator-shared-pvc.yaml
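If you prefer to fill in the placeholders from the command line instead of editing the file by hand, a sed substitution works. The helper below is a hypothetical sketch (it is not one of the product scripts), and the storage class names you pass in are examples from your own cluster.

```shell
# Hypothetical helper: replace both storage class placeholders in a PVC
# descriptor and write the result to a new file, leaving the original intact.
fill_storage_classes() {
  local in="$1" out="$2" sc="$3" fast_sc="$4"
  sed -e "s|<StorageClassName>|${sc}|g" \
      -e "s|<Fast_StorageClassName>|${fast_sc}|g" "$in" > "$out"
}
```

For example: `fill_storage_classes descriptors/operator-shared-pvc.yaml /tmp/operator-shared-pvc.yaml managed-nfs fast-nfs` and then deploy the generated file with `oc create -f`.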
- Optional: For Business Automation Navigator and the FileNet® Content Manager (content)
pattern with static storage, add JDBC drivers to the operator PV <path>.
Copy all of the JDBC drivers that are needed by the components to the persistent volume.
Depending on your storage configuration, you might not need these drivers.
- Db2®
  - db2jcc4.jar
  - db2jcc_license_cu.jar
- Oracle
  - ojdbc8.jar
- Microsoft SQL Server
  - mssql-jdbc-8.2.2.jre8.jar
- PostgreSQL
  - postgresql-42.2.9.jar
The following structure shows an example remote file system.
/root/operator
└── jdbc
    ├── db2
    │   ├── db2jcc4.jar
    │   └── db2jcc_license_cu.jar
    ├── oracle
    │   └── ojdbc8.jar
    ├── sqlserver
    │   └── mssql-jdbc-8.2.2.jre8.jar
    └── postgresql
        └── postgresql-42.2.9.jar
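The layout above can be staged with a short script. The following helper is a hypothetical sketch (it is not part of the product scripts): it assumes you collected the downloaded jars in per-database subdirectories of a source folder and copies them into the operator PV.

```shell
# Hypothetical helper: create the jdbc/<db> folders on the operator PV and
# copy any jars found in the matching subfolder of a source directory.
stage_jdbc_drivers() {
  local pv_root="$1" src="$2"
  local db
  for db in db2 oracle sqlserver postgresql; do
    mkdir -p "${pv_root}/jdbc/${db}"
    # Copy only the drivers you actually downloaded for that database, if any.
    if [ -d "${src}/${db}" ]; then
      cp "${src}/${db}"/*.jar "${pv_root}/jdbc/${db}/" 2>/dev/null || true
    fi
  done
}
```

For example: `stage_jdbc_drivers /root/operator /tmp/downloaded-drivers`.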
- Optional: New in 20.0.3 If you intend to install
Content Collector for SAP as an optional component of the Content Manager pattern, then you must
download the necessary libraries and put them in a directory under cert-kubernetes/scripts.
- Make a saplibs directory in cert-kubernetes/scripts.
Give read and write permissions to the directory by running the chmod command.
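For example, the directory can be created and opened up as follows. The exact permission mode is an assumption; the documentation only says to run chmod.

```shell
# Create the saplibs directory and grant read and write permission
# (mode is an example; adjust to your security requirements).
mkdir -p cert-kubernetes/scripts/saplibs
chmod -R a+rw cert-kubernetes/scripts/saplibs
```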
- Download the SAP NetWeaver SDK 7.50 library from the SAP Service Marketplace.
- Download the SAP JCo Release 3.0.x from the SAP Service Marketplace.
- Extract all of the content of the packages to the saplibs directory.
- Check that you have all of the following libraries.
saplibs/
├── libicudata.so.50
├── libicudecnumber.so
├── libicui18n.so.50
├── libicuuc.so.50
├── libsapcrypto.so
├── libsapjco3.so
├── libsapnwrfc.so
└── sapjco3.jar
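The check can be scripted. The following helper is a hypothetical sketch (it is not part of cert-kubernetes) that reports any library from the list above that is missing from the saplibs directory.

```shell
# Hypothetical helper: verify that every required SAP library is present.
check_saplibs() {
  local dir="$1" missing=0 f
  for f in libicudata.so.50 libicudecnumber.so libicui18n.so.50 \
           libicuuc.so.50 libsapcrypto.so libsapjco3.so libsapnwrfc.so \
           sapjco3.jar; do
    if [ ! -f "${dir}/${f}" ]; then
      echo "missing: ${f}"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all SAP libraries present"
  return "$missing"
}
```

For example: `check_saplibs cert-kubernetes/scripts/saplibs`.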
Results
Wait for the confirmation message that the PVC is bound before you move to the next step.
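If you prefer to script the wait, you can poll the PVC phase with oc. The helper below is a hypothetical sketch (not part of the product scripts); it assumes you are already logged in to the cluster.

```shell
# Hypothetical helper: poll a PVC until its phase is "Bound", checking every
# 2 seconds up to a maximum number of tries (default 30).
wait_for_pvc_bound() {
  local pvc="$1" ns="$2" tries="${3:-30}" phase i
  for i in $(seq "$tries"); do
    phase=$(oc get pvc "$pvc" -n "$ns" -o jsonpath='{.status.phase}' 2>/dev/null)
    if [ "$phase" = "Bound" ]; then
      echo "PVC $pvc is Bound"
      return 0
    fi
    sleep 2
  done
  echo "PVC $pvc is not Bound after $tries checks" >&2
  return 1
}
```

For example: `wait_for_pvc_bound operator-shared-pvc <project_name>`.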
What to do next
You can now set up your cluster manually or use the cluster setup script. For more information,
see Setting up the cluster by running a script.