Configuring ODF as backup storage for OADP

Learn how to create an object storage bucket in Red Hat OpenShift Data Foundation that Red Hat OpenShift APIs for Data Protection (OADP) can use as backup storage.

Overview

Use these instructions if you want to use Red Hat® OpenShift® Data Foundation as the backup storage for an OADP backup of IBM Cloud Pak® for AIOps. For more information about backing up IBM Cloud Pak for AIOps using OADP, see Installing the backup and restore tools.

Procedure

  1. Create an S3-compatible object storage bucket
  2. Retrieve the S3 bucket information and credentials
  3. Prepare the DataProtectionApplication configuration
  4. Verify that the backupStorageLocation object is created
  5. Configure the volume snapshot classes for OADP
  6. (Optional) Back up a project to validate the OADP configuration

1. Create an S3-compatible object storage bucket

  1. Create a YAML file called obc-backup.yml with the following content:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: backup
      namespace: openshift-adp
    spec:
      storageClassName: openshift-storage.noobaa.io
      generateBucketName: backup
    
  2. Run the following command to create an ObjectBucketClaim object called backup that uses the openshift-storage.noobaa.io storage class.

    oc apply -f obc-backup.yml
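
    Optionally, verify that the claim is bound before you continue. This is a minimal check; the PHASE column is expected to report Bound after Red Hat OpenShift Data Foundation provisions the bucket.

    oc get objectbucketclaim backup -n openshift-adp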
    

2. Retrieve the S3 bucket information and credentials

When Red Hat OpenShift Data Foundation creates the backup ObjectBucketClaim instance, it also creates a secret and a configuration map named backup. The backup secret contains the bucket credentials, and the backup configuration map contains the bucket information.

  1. Install s3cmd.

    s3cmd is a command-line tool and client for uploading, retrieving, and managing data in Amazon S3 and other cloud storage service providers.

    Install s3cmd from Amazon S3 Tools.

  2. Retrieve the bucket name and bucket host from the backup configuration map.

    oc extract --to=- cm/backup -n openshift-adp
    

    Example output:

    # BUCKET_HOST
    s3.openshift-storage.svc
    # BUCKET_NAME
    backup-7d9...f4c
    # BUCKET_PORT
    443
    # BUCKET_REGION
    # BUCKET_SUBREGION
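
    If you plan to script the later steps, you can optionally capture these values in shell variables. This is a convenience sketch; the data keys match the oc extract output above.

    BUCKET_HOST=$(oc get cm backup -n openshift-adp -o jsonpath='{.data.BUCKET_HOST}')
    BUCKET_NAME=$(oc get cm backup -n openshift-adp -o jsonpath='{.data.BUCKET_NAME}')
    echo "${BUCKET_HOST} ${BUCKET_NAME}"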
    

  3. Encode the cluster's service CA certificate in Base64 format, and make a note of the returned value for use in step 3.1.

    oc get cm/openshift-service-ca.crt -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo
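
    Optionally, you can capture the encoded value in a shell variable and sanity-check the certificate with openssl. This is a sketch; the CA_CERT variable name is illustrative.

    # Capture the Base64-encoded service CA certificate for use in step 3.1.
    CA_CERT=$(oc get cm/openshift-service-ca.crt -o jsonpath='{.data.service-ca\.crt}' | base64 -w0)
    # Decode it again and print the certificate subject and expiry date.
    echo "${CA_CERT}" | base64 -d | openssl x509 -noout -subject -enddate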
    
  4. Retrieve the bucket credentials from the backup secret.

    oc extract --to=- secret/backup -n openshift-adp
    

    Example output:

    # AWS_ACCESS_KEY_ID
    xxxxxKey_IDzzzzz
    # AWS_SECRET_ACCESS_KEY
    xxxxxxSecretxxxxx
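
    Optionally capture the decoded credentials in shell variables for the later steps. This is a convenience sketch; the secret data is Base64-encoded, so each value must be decoded.

    AWS_ACCESS_KEY_ID=$(oc get secret backup -n openshift-adp -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
    AWS_SECRET_ACCESS_KEY=$(oc get secret backup -n openshift-adp -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)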
    

  5. Identify the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace.

    oc get route s3 -n openshift-storage
    

    Example output:

    NAME   HOST/PORT                                             PATH   SERVICES   PORT       TERMINATION       WILDCARD
    s3     s3-openshift-storage.apps.gocpprd1.emn.gdps.gov.sa           s3         s3-https   reencrypt/Allow   None
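
    Optionally, you can capture only the host name in a shell variable with a JSONPath query. This is a sketch; the HOST_BASE variable name is illustrative.

    HOST_BASE=$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}')
    echo "${HOST_BASE}"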
    

  6. Create a file called .s3cfg in your home directory with the S3 credentials and S3 endpoint URL.

    access_key = <AWS_ACCESS_KEY_ID>
    secret_key = <AWS_SECRET_ACCESS_KEY>
    host_base = <HOST_BASE>
    host_bucket = <HOST_BASE>/%(bucket)s
    signature_v2 = True
    

    Where

    • <AWS_ACCESS_KEY_ID> and <AWS_SECRET_ACCESS_KEY> are the values returned in step 2.4
    • <HOST_BASE> is the value returned in step 2.5. For example, s3-openshift-storage.apps.gocpprd1.emn.gdps.gov.sa
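
    If you captured the credentials and host name in shell variables as in the earlier sketches, you can optionally generate the file with a heredoc. This sketch assumes that the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and HOST_BASE variables are set.

    # Write ~/.s3cfg from the values captured in steps 2.2 to 2.5.
    cat > ~/.s3cfg <<EOF
    access_key = ${AWS_ACCESS_KEY_ID}
    secret_key = ${AWS_SECRET_ACCESS_KEY}
    host_base = ${HOST_BASE}
    host_bucket = ${HOST_BASE}/%(bucket)s
    signature_v2 = True
    EOF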
  7. Validate the configuration.

    Run the following command to list the contents of the bucket. The command returns no output because the bucket is empty; if the configuration is incorrect, the command returns an error message.

    s3cmd la
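
    Optionally, you can also run a round-trip test that uploads, lists, and then deletes a small object. This sketch assumes the BUCKET_NAME variable from step 2.2; substitute your bucket name if you did not capture it.

    # Upload a small test object, list it, then remove it again.
    echo test > /tmp/oadp-s3-test.txt
    s3cmd put /tmp/oadp-s3-test.txt s3://${BUCKET_NAME}/oadp-s3-test.txt
    s3cmd ls s3://${BUCKET_NAME}/
    s3cmd del s3://${BUCKET_NAME}/oadp-s3-test.txt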
    

3. Prepare the DataProtectionApplication configuration

  1. Create a YAML file called dpa-backup.yml. Use the S3 bucket information from the previous steps to configure the default backup storage location.

    Example configuration:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: oadp-backup
      namespace: openshift-adp
    spec:
      features:
        dataMover:
          enable: true
          credentialName: restic-enc-key
      configuration:
        velero:
          defaultPlugins:
            - aws
            - openshift
      backupLocations:
        - velero:
            config:
              profile: "default"
              region: "us-east-1"
              s3Url: https://s3.openshift-storage.svc/
              s3ForcePathStyle: "true"
              insecureSkipTLSVerify: "true"
            provider: aws
            default: true
            credential:
              key: cloud
              name: cloud-credentials
            objectStorage:
              bucket: backup-6bce9ba8-59e1-4b2c-9c17-a88498954a56
              prefix: oadp
              caCert: <LS0tLS1.........>

    Where

    • bucket is the BUCKET_NAME value that was returned in step 2.2
    • caCert is the Base64-encoded certificate value that you noted in step 2.3

  2. Create the cloud-credentials secret in the openshift-adp namespace with the backup ObjectBucketClaim credentials.

    Create a file called cloud-credentials with the following content:

    [default]
    aws_access_key_id=<AWS_ACCESS_KEY_ID>
    aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
    

    Where <AWS_ACCESS_KEY_ID> and <AWS_SECRET_ACCESS_KEY> are the values retrieved in step 2.4.

    oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials
    
  3. Create the restic-enc-key secret in the openshift-adp namespace, with an encryption key to use for application data.

    Run the following command, which uses openssl to generate a random password to use as the encryption key.

    oc create secret generic restic-enc-key -n openshift-adp --from-literal=RESTIC_PASSWORD=$(openssl rand -base64 24)
    
  4. Run the following command to apply the OADP configuration.

    oc apply -f dpa-backup.yml
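
    After the configuration is applied, the OADP operator deploys Velero in the openshift-adp namespace. You can optionally confirm that the pods are running; the exact pod names vary by OADP version.

    oc get pods -n openshift-adp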
    

4. Verify that the backupStorageLocation object is created

Run the following command to verify that the backupStorageLocation object is created, and has a phase of Available.

oc get backupStorageLocation -n openshift-adp

Example output:

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
oadp-backup    Available   1s               3d16h   true
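
If the phase is not yet Available, you can optionally wait for it instead of polling. This sketch assumes an oc client that supports the --for=jsonpath condition (OpenShift 4.11 or later).

oc wait backupstoragelocation/oadp-backup -n openshift-adp --for=jsonpath='{.status.phase}'=Available --timeout=120s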

5. Configure the volume snapshot classes for OADP

  1. Run the following command to list all the available volume snapshot classes.

    oc get volumesnapshotclass
    
  2. Set the deletionPolicy attribute to Retain for each volume snapshot class.

    Run the following command:

    for class in $(oc get volumesnapshotclass -o name); do
      oc patch $class --type=merge -p '{"deletionPolicy": "Retain"}'
    done
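
    Optionally verify the change with a custom-columns query that prints the deletion policy of each class.

    oc get volumesnapshotclass -o custom-columns=NAME:.metadata.name,DELETIONPOLICY:.deletionPolicy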
    
  3. Set the velero.io/csi-volumesnapshot-class label to true for each volume snapshot class.

    Run the following command:

    oc label volumesnapshotclass velero.io/csi-volumesnapshot-class="true" --all
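
    Optionally confirm that the label is now set on every volume snapshot class.

    oc get volumesnapshotclass --show-labels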
    

6. (Optional) Back up a project to validate the OADP configuration

Optionally test your configuration by backing up the resources in a test namespace. This example uses a test namespace called production.

  1. Create a file called backup.yml with the following content.

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: backup-production
      namespace: openshift-adp
    spec:
      includedNamespaces:
      - production
    
  2. Run the following command to apply the backup configuration.

    oc apply -f backup.yml
    
  3. Wait a few minutes, and then check the backup status. Verify that the backup object has a phase of Completed.

    oc describe backup backup-production -n openshift-adp
    

    Example output:

    Status:
      Completion Timestamp: 2024-05-22T13:52:25Z
      Expiration:            2024-06-21T13:51:33Z
      Format Version:        1.1.0
      Phase:                 Completed
      Progress:
        Items Backed Up:  55
        Total Items:      55
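
    For a quicker check of only the phase, you can optionally use a JSONPath query.

    oc get backup backup-production -n openshift-adp -o jsonpath='{.status.phase}{"\n"}'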
    

  4. Run the following command to review the contents of the S3 storage.

    s3cmd la -r
    

    Example output:

    2024-05-22 13:52   2597  s3://backup-6bce9ba8-59e1-4b2c-9c17-a88498954a56/docker/registry/v2/blobs/sha256/05/057c885bbca68570905d4a9e26948b6fa07e06ecc80951aa0def58cd227a2513/data
    2024-05-22 13:51   4512  s3://backup-6bce9ba8-59e1-4b2c-9c17-a88498954a56/docker/registry/v2/blobs/sha256/07/07dc0ebf5f604feb9222bc7187a18dff7cb8d48b56cdacf0d1e564f20301b292/data
    2024-05-22 13:52   455  s3://backup-6bce9ba8-59e1-4b2c-9c17-a88498954a56/docker/registry/v2/blobs/sha256/13/1367c2cdbbe1a194f979ff4298091d41ab445934892cb325f8cc715812bc9f2d/data