Configuring ODF as backup storage for OADP
Learn how to create an object storage bucket in Red Hat OpenShift Data Foundation that Red Hat OpenShift APIs for Data Protection (OADP) can use as backup storage.
Overview
Use these instructions if you want to use Red Hat OpenShift Data Foundation as the backup storage for an OADP backup of IBM Cloud Pak for AIOps. For more information about backing up IBM Cloud Pak for AIOps using OADP, see Installing the backup and restore tools.
Procedure
- Create an S3-compatible object storage bucket
- Retrieve the S3 bucket information and credentials
- Prepare the DataProtectionApplication configuration
- Verify that the backupStorageLocation object is created
- Configure the volume snapshot classes for OADP
- Back up a project to validate the OADP configuration
1. Create an S3-compatible object storage bucket
- Create a YAML file called `obc-backup.yml` with the following content:

  ```yaml
  apiVersion: objectbucket.io/v1alpha1
  kind: ObjectBucketClaim
  metadata:
    name: backup
    namespace: openshift-adp
  spec:
    storageClassName: openshift-storage.noobaa.io
    generateBucketName: backup
  ```

- Run the following command to create an ObjectBucketClaim object called `backup` that uses the `openshift-storage.noobaa.io` storage class:

  ```shell
  oc apply -f obc-backup.yml
  ```
2. Retrieve the S3 bucket information and credentials
When Red Hat OpenShift Data Foundation creates the `backup` ObjectBucketClaim instance, it also creates a secret and a configuration map, both named `backup`. The `backup` secret contains the bucket credentials, and the `backup` configuration map contains the bucket information.

- Retrieve the bucket name and bucket host from the `backup` configuration map:

  ```shell
  oc extract --to=- cm/backup
  ```

  Example output:

  ```
  # BUCKET_HOST
  s3.openshift-storage.svc
  # BUCKET_NAME
  backup-7d9...f4c
  # BUCKET_PORT
  443
  # BUCKET_REGION

  # BUCKET_SUBREGION
  ```

- Encode the certificate in Base64 format, and make a note of the returned value for use in step 3.1:

  ```shell
  oc get cm/openshift-service-ca.crt -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo
  ```

- Retrieve the bucket credentials from the `backup` secret:

  ```shell
  oc extract --to=- secret/backup
  ```

  Example output:

  ```
  # AWS_ACCESS_KEY_ID
  xxxxxKey_IDzzzzz
  # AWS_SECRET_ACCESS_KEY
  xxxxxxSecretxxxxx
  ```

- Identify the public URL for the S3 endpoint from the `s3` route in the `openshift-storage` namespace:

  ```shell
  oc get route s3 -n openshift-storage
  ```

  Example output:

  ```
  NAME   HOST/PORT                                            PATH   SERVICES   PORT       TERMINATION       WILDCARD
  s3     s3-openshift-storage.apps.gocpprd1.emn.gdps.gov.sa          s3         s3-https   reencrypt/Allow   None
  ```
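The `base64 -w0` flag produces a single unwrapped line, which is the form that the `caCert` field in step 3 expects. As a quick local sanity check of the encoding step (a sketch using a throwaway file rather than the cluster's `openshift-service-ca.crt` configuration map):

```shell
# Illustrative only: encode a sample file as one unwrapped base64 line,
# the same shape as the caCert value noted in step 2.2
printf 'sample-cert-data' > /tmp/sample.crt
base64 -w0 /tmp/sample.crt; echo
```

Decoding the value with `base64 -d` should return the original file content unchanged.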
3. Prepare the DataProtectionApplication configuration
- Create a YAML file called `dpa-backup.yml`. Use the S3 bucket information from the previous steps to configure the default backup storage location.

  Example configuration:

  ```yaml
  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: oadp-backup
    namespace: openshift-adp
  spec:
    features:
      dataMover: {}
    configuration:
      velero:
        defaultPlugins:
          - aws
          - openshift
    backupLocations:
      - velero:
          config:
            profile: "default"
            region: "us-east-1"
            s3Url: https://s3.openshift-storage.svc/
            s3ForcePathStyle: "true"
            insecureSkipTLSVerify: "true"
          provider: aws
          default: true
          credential:
            key: cloud
            name: cloud-credentials
          objectStorage:
            bucket: backup-6bce9ba8-59e1-4b2c-9c17-a88498954a56
            prefix: oadp
            caCert: <LS0tLS1.........>
  ```

  Where `bucket` is the bucket name from step 2.1 and `caCert` is the Base64-encoded certificate from step 2.2.

- Create the `cloud-credentials` secret in the `openshift-adp` namespace with the `backup` ObjectBucketClaim credentials.

  Create a file called `cloud-credentials` with the following content:

  ```
  [default]
  aws_access_key_id=<AWS_ACCESS_KEY_ID>
  aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
  ```

  Where `<AWS_ACCESS_KEY_ID>` and `<AWS_SECRET_ACCESS_KEY>` are the values retrieved in step 2.3. Then run:

  ```shell
  oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials
  ```

- Create the `restic-enc-key` secret in the `openshift-adp` namespace, with an encryption key to use for application data.

  Run the following command, which uses `openssl` to generate a random password to use as the encryption key:

  ```shell
  oc create secret generic restic-enc-key -n openshift-adp --from-literal=RESTIC_PASSWORD=$(openssl rand -base64 24)
  ```

- Run the following command to apply the OADP configuration:

  ```shell
  oc apply -f dpa-backup.yml
  ```
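The strength of the restic encryption key depends entirely on this generated password, so it helps to know what `openssl rand -base64 24` actually emits: 24 random bytes, which encode to exactly 32 Base64 characters. A local check, assuming `openssl` is installed (no cluster access needed):

```shell
# 24 raw bytes -> 24/3 * 4 = 32 base64 characters, with no '=' padding
KEY=$(openssl rand -base64 24)
echo "${#KEY}"
```

Losing this key makes the backed-up application data unrecoverable, so store it somewhere safe in addition to the secret.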
4. Verify that the backupStorageLocation object is created
Run the following command to verify that the `backupStorageLocation` object is created, and has a phase of `Available`:

```shell
oc get backupStorageLocation
```

Example output:

```
NAME          PHASE       LAST VALIDATED   AGE     DEFAULT
oadp-backup   Available   1s               3d16h   true
```
5. Configure the volume snapshot classes for OADP
- Run the following command to list all the available volume snapshot classes:

  ```shell
  oc get volumesnapshotclass
  ```

- Set the `deletionPolicy` attribute to `Retain` for each volume snapshot class. Run the following command:

  ```shell
  for class in $(oc get volumesnapshotclass -o name); do
    oc patch $class --type=merge -p '{"deletionPolicy": "Retain"}'
  done
  ```

- Set the `velero.io/csi-volumesnapshot-class` label to `true` for each volume snapshot class. Run the following command:

  ```shell
  oc label volumesnapshotclass velero.io/csi-volumesnapshot-class="true" --all
  ```
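To preview what the patch loop in step 5.2 will touch before modifying anything, the same loop shape can be dry-run with `echo` standing in for `oc patch` (the class names below are illustrative only, not values from your cluster):

```shell
# Dry run of the patch loop: echo replaces `oc patch`, so nothing is modified
for class in volumesnapshotclass/example-rbd-snapclass \
             volumesnapshotclass/example-cephfs-snapclass; do
  echo "would patch $class"
done
```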
6. (Optional) Back up a project to validate the OADP configuration
Optionally test your configuration by backing up the resources in a test namespace. This example uses a test namespace called `production`.

- Create a file called `backup.yml` with the following content:

  ```yaml
  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: backup-production
    namespace: openshift-adp
  spec:
    includedNamespaces:
      - production
  ```

- Run the following command to apply the backup configuration:

  ```shell
  oc apply -f backup.yml
  ```

- Wait a couple of minutes, then check the backup completion status. Verify that the backup object has a phase of `Completed`:

  ```shell
  oc describe backup backup-production
  ```

  Example output:

  ```
  Status:
    Completion Timestamp:  2024-05-22T13:52:25Z
    Expiration:            2024-06-21T13:51:33Z
    Format Version:        1.1.0
    Phase:                 Completed
    Progress:
      Items Backed Up:  55
      Total Items:      55
  ```
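For scripted validation, the phase can be pulled out of saved `oc describe` output rather than read by eye. A minimal sketch using `awk` on a local copy of the status text (the file path and contents are illustrative):

```shell
# Save a copy of the status section, then extract the Phase field with awk
cat <<'EOF' > /tmp/backup-status.txt
Status:
  Phase:  Completed
EOF
awk '/Phase:/ {print $2}' /tmp/backup-status.txt
```

On a live cluster, the same check is usually done directly with a jsonpath query, for example `oc get backup backup-production -o jsonpath='{.status.phase}'`.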