Implementing storage

There are several reasons why you might want to set up your own persistent volume (PV) and persistent volume claim (PVC).

  • You want IBM Resource Registry to be backed up automatically. Otherwise, you can do manual backups.
  •  New in 20.0.2  You want to use shared storage to share the file upload cache among servers for Application Engine.
  • You want to use your own JDBC driver for Application Engine. Otherwise, the operator has a shared PV to store the default JDBC driver.

Procedure

  1. To automatically back up Resource Registry, you must enable persistent storage for the backup using a PV. You can either use dynamic provisioning that you already set up in your cluster, or you can create the PV manually.
    Important: Resource Registry auto-backup is enabled by default for the Pattern installation only. If you do not have the Pattern installation, set up Resource Registry auto-backup to keep Resource Registry stable.
    1. Set resource_registry_configuration.auto_backup.enable to true in the configuration parameters. See Application Engine configuration parameters.
    2. Create the PV.
      • Using dynamic provisioning:
        1. Set resource_registry_configuration.auto_backup.dynamic_provision.enable to true in the configuration parameters.
        2. Set resource_registry_configuration.auto_backup.dynamic_provision.storage_class to the name of a storage class from your dynamic provisioning setup. See Dynamic provisioning.
      • Creating the PV manually:
        1. Set resource_registry_configuration.auto_backup.dynamic_provision.enable to false in the configuration parameters.
        2. Create the PV and PVC using the examples below. Set resource_registry_configuration.auto_backup.pvc_name to the name of the PVC that you create.

          The PV must be shareable by pods across the whole cluster. For a single-node Kubernetes cluster, you can create a hostPath PV. For multiple nodes in a cluster, create the PV using shareable storage, such as Network File System (NFS) or Gluster File System (GlusterFS).

          The following example shows an NFS PV.
          kind: PersistentVolume
          apiVersion: v1
          metadata:
            name: rr-pv-volume
            labels:
              type: nfs
          spec:
            storageClassName: manual
            capacity:
              storage: 3Gi
            accessModes:
              - ReadWriteMany
            nfs:
              path: /mnt/dba/rrdata
              server: 172.17.0.2
          
          Create the PVC to bind to the correct PV.
          The following example shows a PVC.
          kind: PersistentVolumeClaim
          apiVersion: v1
          metadata:
            name: rr-storage-pvc
          spec:
            storageClassName: manual
            accessModes:
              - ReadWriteMany
            resources:
              requests:
                storage: 3Gi
            volumeName: rr-pv-volume
          
          Note: For the NFS server, you must grant minimal privileges:
          1. In the /etc/exports configuration file, enter the following line:
            directory_path ip_address(rw,sync,no_subtree_check)
          2. Give the least privilege to the mounted directories using the following commands:
            chown -R :65534 directory_path
            chmod g+rw directory_path
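    The auto-backup settings from this step map to the custom resource as follows. This is an illustrative sketch: the key names are the parameters described above, but the storage class name is an example value that you must replace with your own.

```yaml
# Illustrative fragment of the custom resource (example values).
resource_registry_configuration:
  auto_backup:
    enable: true
    # Option A: dynamic provisioning
    dynamic_provision:
      enable: true
      storage_class: managed-nfs-storage   # example; use your storage class
    # Option B: manually created PV/PVC
    # (set dynamic_provision.enable to false and reference your PVC):
    # pvc_name: rr-storage-pvc
```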
  2.  New in 20.0.2  To use shared storage to share the file upload cache among servers for Application Engine, you must enable persistent storage for the shared storage using a PV. You can either use dynamic provisioning that you already set up in your cluster, or you can create the PV manually.
    Note: To set the replica size to more than 1 for this cluster, you must enable shared storage.
    1. Set application_engine_configuration[*].share_storage.enabled to true in the configuration parameters. See IBM Business Automation Application Engine configuration parameters.
    2. Create the PV.
      • Using dynamic provisioning:
        1. Set application_engine_configuration[*].share_storage.auto_provision.enabled to true in the configuration parameters.
        2. Set application_engine_configuration[*].share_storage.auto_provision.storage_class to the name of a storage class from your dynamic provisioning setup. See Dynamic provisioning.
      • Creating the PV manually:
        1. Set application_engine_configuration[*].share_storage.auto_provision.enabled to false in the configuration parameters.
        2. Create the PV and PVC using the examples below. Set application_engine_configuration[*].share_storage.pvc_name to the name of the PVC that you create.

          The PV must be shareable by pods across the whole cluster. For a single-node Kubernetes cluster, you can create a hostPath PV. For multiple nodes in a cluster, create the PV using shareable storage, such as Network File System (NFS) or Gluster File System (GlusterFS).

          The following example shows an NFS PV.
          kind: PersistentVolume
          apiVersion: v1
          metadata:
            name: ae-pv-volume
            labels:
              type: nfs
          spec:
            storageClassName: manual
            capacity:
              storage: 20Gi
            accessModes:
              - ReadWriteMany
            nfs:
              path: /mnt/dba/aedata
              server: 172.17.0.2
          
          Create the PVC to bind to the correct PV.
          The following example shows a PVC.
          kind: PersistentVolumeClaim
          apiVersion: v1
          metadata:
            name: ae-file-pvc
          spec:
            storageClassName: manual
            accessModes:
              - ReadWriteMany
            resources:
              requests:
                storage: 3Gi
            volumeName: ae-pv-volume
          
          Note: For the NFS server, you must grant minimal privileges:
          1. In the /etc/exports configuration file, enter the following line:
            directory_path ip_address(rw,sync,no_subtree_check)
          2. Give the least privilege to the mounted directories using the following commands:
            chown -R :65534 directory_path
            chmod g+rw directory_path
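    The shared-storage settings from this step map to the custom resource as follows. This is an illustrative sketch: application_engine_configuration is a list, so the settings apply per instance; the instance name and storage class name are example values.

```yaml
# Illustrative fragment of the custom resource (example values).
application_engine_configuration:
  - name: instance1                        # example instance name
    share_storage:
      enabled: true
      # Option A: dynamic provisioning
      auto_provision:
        enabled: true
        storage_class: managed-nfs-storage # example; use your storage class
      # Option B: manually created PV/PVC
      # (set auto_provision.enabled to false and reference your PVC):
      # pvc_name: ae-file-pvc
```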
  3. To use Oracle or PostgreSQL, or use your own JDBC driver for Db2® for Application Engine, you must create a PV and PVC to store the driver files.
    1. Create the PV.

      The PV must be shareable by pods across the whole cluster. For a single-node Kubernetes cluster, you can create a hostPath PV. For multiple nodes in a cluster, create the PV using shareable storage, such as Network File System (NFS) or Gluster File System (GlusterFS).

      The following example shows a hostPath PV.
      kind: PersistentVolume
      apiVersion: v1
      metadata:
        name: jdbc-pv-volume
        labels:
          type: local
      spec:
        storageClassName: manual
        capacity:
          storage: 2Gi
        accessModes:
          - ReadWriteMany
        hostPath:
          path: "/mnt/dba/data"
      
      The following example shows an NFS PV.
      kind: PersistentVolume
      apiVersion: v1
      metadata:
        name: jdbc-pv-volume
        labels:
          type: nfs
      spec:
        storageClassName: manual
        capacity:
          storage: 2Gi
        accessModes:
          - ReadWriteMany
        nfs:
          path: /mnt/dba/data
          server: 172.17.0.2
      
    2. Create the PVC to bind to the correct PV. Or, if you are using GlusterFS with dynamic provisioning, create the PVC with the correct storageClassName so that the PV is created automatically.
      The following example shows a PVC.
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: jdbc-pvc
      spec:
        storageClassName: manual
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 2Gi
      
      The mounted directory must contain a jdbc subdirectory, which in turn holds subdirectories with the required JDBC driver files. Add the following structure to the mounted directory (which in this case is called /mnt/dba/data):
      • For Db2:
        /mnt/dba/data
          /jdbc
            /db2
              /db2jcc4.jar
              /db2jcc_license_cu.jar
      •  New in 20.0.3  For Oracle:
        /mnt/dba/data
          /jdbc
            /oracle
              /ojdbc8.jar
            /oracle_node
              /…
              /…
        
      •  New in 20.0.3  For PostgreSQL:
        /mnt/dba/data
          /jdbc
            /postgresql
              /postgresql-42.2.16.jar
        
      • The /jdbc folder and its contents depend on the configuration. Copy the JDBC driver files to the mounted directory as shown in the previous example (/db2 for Db2, /oracle for Oracle, and /postgresql for PostgreSQL).
      • For PostgreSQL, download the JDBC driver files from https://jdbc.postgresql.org/download.html.
      • For Oracle, besides the JDBC driver files, Oracle Instant Client is also required so that ODPI-C applications can connect to the database. Download the Oracle Instant Client ZIP file for your operating system from https://www.oracle.com/database/technologies/instant-client.html. Extract the file and copy the extracted files to the /oracle_node directory, as shown in the previous example.
      • Make sure those files have the correct access. IBM Cloud Pak® for Automation products on OpenShift use an arbitrary UID to run the applications. Grant privileges as described in the following note.
      Note: For the NFS server, you must grant minimal privileges:
      1. In the /etc/exports configuration file, enter the following line:
        directory_path ip_address(rw,sync,no_subtree_check)
      2. Give the least privilege to the mounted directories using the following commands:
        chown -R :65534 directory_path
        chmod g+rw directory_path
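      The directory layout and permissions described above can be sketched as a small script. The mount point and jar file names are examples: substitute the directory that backs your PV and the driver files that you actually downloaded.

```shell
#!/bin/sh
# Sketch: prepare the jdbc/ layout on the directory that backs the PV.
# MOUNT_DIR is an example stand-in for /mnt/dba/data.
MOUNT_DIR=${MOUNT_DIR:-/tmp/dba-data-demo}

mkdir -p "$MOUNT_DIR/jdbc/db2" \
         "$MOUNT_DIR/jdbc/oracle" \
         "$MOUNT_DIR/jdbc/postgresql"

# Copy the driver files that you downloaded (uncomment and adjust paths):
# cp db2jcc4.jar db2jcc_license_cu.jar "$MOUNT_DIR/jdbc/db2/"
# cp ojdbc8.jar                        "$MOUNT_DIR/jdbc/oracle/"
# cp postgresql-42.2.16.jar           "$MOUNT_DIR/jdbc/postgresql/"

# Least-privilege access for pods that run with an arbitrary UID:
chown -R :65534 "$MOUNT_DIR" 2>/dev/null || true  # needs root on the NFS host
chmod -R g+rw "$MOUNT_DIR"
```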
    3. Add the storage information to the configuration parameters.
      To use your own JDBC driver for Application Engine, set application_engine_configuration.use_custom_jdbc_drivers to true, and set application_engine_configuration.database.custom_jdbc_pvc to the name of your PVC in the configuration parameters. See Application Engine configuration parameters.
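      Put together, the custom resource fragment might look like the following sketch. The instance name is an example, and the PVC name is the one from the earlier example; adjust both to your own setup.

```yaml
# Illustrative fragment of the custom resource (example values).
application_engine_configuration:
  - name: instance1               # example instance name
    use_custom_jdbc_drivers: true
    database:
      custom_jdbc_pvc: jdbc-pvc   # name of the PVC that you created
```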