Configuring backup after Guardium Insights installation
Create a persistent volume claim (PVC) so that backups can run successfully after Guardium Insights is installed.
Before you begin
When you apply the patch, the "claimName": value in the oc patch command must match the name of the PVC that you create.
Procedure
- Deploy a Network File System (NFS) to your Guardium® Insights cluster. You can deploy an NFS in multiple ways. For example, you can clone the repository in your terminal by running the following command:
git clone https://github.com/kubernetes-incubator/external-storage.git kubernetes-incubator
For this example, use the kubernetes-incubator-staging folder. This folder contains rbac.yaml and deployment.yaml with the staging namespace already configured.
- Change the PROVISIONER_NAME value from value: fuseim.pri.ifs to value: storage.io/nfs.
- Update class.yaml so the PROVISIONER_NAME matches the one from the previous step.
- Deploy the modifications:
oc create -f deploy/class.yaml
oc create -f deploy/deployment.yaml
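After the edit, the env section of deployment.yaml should look roughly like the following sketch. The NFS_SERVER and NFS_PATH placeholders, and the surrounding structure, follow the upstream nfs-client-provisioner example and may differ in your copy; only the PROVISIONER_NAME change is taken from this procedure.

```yaml
# Excerpt (sketch) from deployment.yaml after updating the provisioner name.
env:
  - name: PROVISIONER_NAME
    value: storage.io/nfs   # changed from fuseim.pri.ifs
  - name: NFS_SERVER
    value: <your-nfs-server-ip>
  - name: NFS_PATH
    value: <your-nfs-export-path>
```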
- Create a persistent volume (PV) and persistent volume claim (PVC) in accordance with the NFS from step 1. These examples show you how to create the PV and PVC, but you might need to adjust them according to your needs:
- Use the yaml file backuppv.yaml.
# This yaml file is to be used to create a PV based on the existing NFS:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: storage.io/nfs
  name: i-am-nfs-v320-backup
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 500Gi
  nfs:
    path: /data/insights
    server: 10.21.42.111
  persistentVolumeReclaimPolicy: Retain
  storageClassName: managed-nfs-storage
  volumeMode: Filesystem
- To create and apply the PV, run the following commands:
oc project staging
oc apply -f backuppv.yaml
The staging value is the namespace where Guardium Insights is installed.
- Create a PVC yaml file and apply it in the same manner as the PV. The following example shows a sample PVC yaml file:
# This yaml file is to be used to create a PVC based on the existing PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: backup-pvc-support # This name is defined by the customer and passed into the oc patch commands under the claimName property.
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Gi # Size of the storage that the PVC obtains from the PV
  volumeName: i-am-nfs-v320-backup # Name of the PV previously configured with the storageClassName; volumeName binds the claim to that PV
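To confirm that the PV and PVC bound correctly before you continue, you can check their status. The object names below match the examples in this procedure; substitute your own if you changed them.

```shell
# Both objects should report STATUS "Bound" once the PV and PVC are linked.
oc get pv i-am-nfs-v320-backup
oc get pvc backup-pvc-support -n staging
```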
- Edit the Guardium Insights custom resource (CR) with backup values by using the code from the following example. The following list contains the name values:
- Postgres name: gi-postgres-backup
- MongoDB name: gi-backup-support-mount
- Db2 name: gi-backup-support-mount
oc patch guardiuminsights $(oc get guardiuminsights -o jsonpath='{range .items[*]}{.metadata.name}') --type merge -p '{"spec":{"guardiumInsightsGlobal":{"backupsupport":{"enabled":"true","name":"backup-pvc-support"}}}}'
Note:
- If the PVC is automatically mounted, it has the "storageClassName": value as "rook-cephfs". If the value is "managed-nfs-storage", run the patch command in step 4.
- The PVC must be specified in the Guardium Insights CR under the guardiumInsightsGlobal.backupsupport.name section when guardiumInsightsGlobal.backupsupport.enabled is set to true.
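After you apply the patch, you can confirm that the CR carries the backup settings. This check is a sketch that assumes a single Guardium Insights instance in the current namespace:

```shell
# Should print the backupsupport block, including enabled and the PVC name.
oc get guardiuminsights $(oc get guardiuminsights -o jsonpath='{.items[0].metadata.name}') \
  -o jsonpath='{.spec.guardiumInsightsGlobal.backupsupport}'
```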
- Mount Postgres to the NFS PV from step 2.
oc patch sts $(oc get guardiuminsights -o jsonpath='{range .items[*]}{.metadata.name}')-postgres-keeper --type='json' -p '[{"op":"add","path":"/spec/template/spec/volumes/2","value":{"name":"gi-postgres-backup", "persistentVolumeClaim":{"claimName":"backup-pvc-support"}}},{"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/3", "value":{"mountPath":"/opt/data/backup","name":"gi-postgres-backup"}}]'
- Mount the MongoDB Community container to the NFS PV from step 2.
- Update the claimName to the name of your PVC volume with the following code. The $BACKUP_MONGO_CLAIM_NAME and $BACKUP_PVC_NAME environment variables hold the MongoDB volume name (gi-backup-support-mount) and the PVC name (backup-pvc-support); the command uses double quotes so that the shell expands them.
oc patch $(oc get mongodbcommunity -oname) --type='json' -p "[{\"op\":\"add\",\"path\":\"/spec/statefulSet/spec/template/spec/volumes\",\"value\":[{\"name\":\"$BACKUP_MONGO_CLAIM_NAME\",\"persistentVolumeClaim\":{\"claimName\":\"$BACKUP_PVC_NAME\"}}]}]"
- Update the MongoDB Community container with the volumeMounts section.
MONGOD_CONTAINER_JSON=$(oc get $(oc get mongodbcommunity -oname) -ojson | jq '.spec.statefulSet.spec.template.spec.containers[0]' | jq --arg backup_claim "$BACKUP_MONGO_CLAIM_NAME" '. + { "volumeMounts": [ { "name": $backup_claim, "mountPath": "/opt/data/backup" } ] }' -c)
echo $MONGOD_CONTAINER_JSON
- Install the patch for MongoDB.
oc patch $(oc get mongodbcommunity -oname) --type='json' -p "[{\"op\":\"replace\",\"path\":\"/spec/statefulSet/spec/template/spec/containers/0\",\"value\":$MONGOD_CONTAINER_JSON}]"
- Verify completion by running the following code.
oc get mongodbcommunity -oyaml
- Mount the db2ucluster to the NFS PV from step 2.
oc patch db2ucluster $(oc get guardiuminsights -o jsonpath='{range .items[*]}{.metadata.name}')-db2 --type='json' -p '[{"op":"add","path":"/spec/storage/3","value":{"name":"backup","claimName":"backup-pvc-support", "spec":{"resources":{}},"type":"existing"}}]'
Tip: The claimName for all three databases is backup-pvc-support.
- Verify the mounting of Postgres, MongoDB Community, and Db2:
oc describe pvc <pvc_name>
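Beyond describing the PVC, you can also check that the pods actually mount the backup path. This is a sketch that assumes the PVC name and mount path used in the examples above:

```shell
# List each pod and the mount paths of its first container,
# then keep only the ones that include the backup mount.
oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].volumeMounts[*].mountPath}{"\n"}{end}' \
  | grep /opt/data/backup
```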