Configuring S3 backup settings for fresh install of the Management subsystem

You can configure backups for your management subsystem in your Kubernetes environment.

Before you begin

Review Backing up and restoring the management subsystem.
If you have a two data center disaster recovery deployment, before you configure S3 backup settings you must first disable 2DCDR on both sites. Follow these steps:
  1. Remove the multiSiteHA section from the spec section of the management CRs on both data centers. Take note of which data center is the active and which is the warm-standby, and keep a copy of the multiSiteHA sections that you remove (a sketch for saving a copy of each CR follows these steps).
  2. Configure your S3 backup settings. You must specify different backup locations for each data center; see Backup and restore requirements for a two data center deployment.
  3. Add the multiSiteHA sections back to the management CRs in both data centers, ensuring that your original active is set to be the active again.
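
For example, before you remove the multiSiteHA sections you can save a copy of each management CR so that the original settings are available to add back in step 3. This is a minimal sketch; it assumes the ManagementCluster custom resource short name mgmt (use managementcluster if the short name is not registered in your cluster) and placeholder CR, site, and namespace names:
  kubectl get mgmt <management-cr-name> -n <namespace> -o yaml > mgmt-cr-<site>-original.yaml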

About this task

If you haven't already, configure your management subsystem custom resource with the databaseBackup subsection.

Note: s3provider supports custom S3 solutions in v10.0.2.0 and greater. This support includes the new backup parameters backupCerts and backups3URIStyle.

Important: For S3 backups of the management subsystem, do not use retention features provided by Cloud-based S3 storage providers. Use of such features can result in periodic deletion of archived backups, which can cause backup corruptions that can cause database restores to fail.
Warning:
When configuring a new management subsystem for S3 backup, the content of the S3 bucket path must be empty. If you want to configure a management subsystem to use the content of an S3 bucket path already used by another management subsystem, you have two options:
  1. Shut down the Postgres database on the existing management subsystem that points to the S3 bucket path before you start the new management subsystem.
  2. In a disaster recovery scenario where you want to restore from the backup of an existing installation: create a new S3 path and use S3 tools to copy the backup content from the existing management subsystem S3 bucket path to the new bucket path (see the sketch after this list).
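
A sketch of such a copy using the AWS CLI follows; the bucket names and paths are placeholders, and the equivalent command for your S3 provider's tooling might differ:
  aws s3 sync s3://<existing-bucket>/<existing-path> s3://<new-bucket>/<new-path>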

Procedure

  1. Create a backup secret.

    The backup secret is a Kubernetes secret that contains the access key and access key secret (the S3 credentials) for the storage that holds your management backups.

    For examples of how to generate the appropriate access key and secret, see the documentation for your S3 provider (for example, IBM Cloud Object Storage or AWS).

    The secret can be created with the following command:
    $ kubectl create secret generic mgmt-backup-secret --from-literal=key='<YOUR ACCESS KEY>' \
       --from-literal=keysecret='<YOUR ACCESS KEY SECRET>' -n <namespace_of_management_subsystem>
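
    Optionally, confirm that the secret exists and contains the key and keysecret entries before you reference it in the custom resource. A quick check, using the secret name from the command above:
    kubectl describe secret mgmt-backup-secret -n <namespace_of_management_subsystem>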
  2. Ensure that your Management subsystem custom resource is configured with the databaseBackup subsection.

    For example:

    databaseBackup:
      protocol: objstore
      s3provider: ibm
      host: s3.eu-gb.cloud-object-storage.appdomain.cloud/eu-standard
      path: apic-backup
      credentials: mgmt-backup-secret
      backupCerts: <custom-s3-server-CA-cert>
      backups3URIStyle: host(default)/path
      schedule: "0 3 * * *"
      repoRetentionFull: 2

    Note: backupCerts and backups3URIStyle are used for custom s3 only, on v10.0.2.0 or greater.
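
    If you maintain the management CR as a YAML file, which is typical for a fresh install, you can add the databaseBackup subsection to the file and then apply it. A minimal sketch, assuming the file is named management_cr.yaml:
    kubectl apply -f management_cr.yaml -n <namespace_of_management_subsystem>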

    Table 1. Backup configuration settings

    protocol
    The type of the backup. For S3 storage: objstore.

    When the parameter is not set, the backup type defaults to local storage backup.

    s3provider
    Name of the S3 provider to use. You must specify one of the supported values: aws, ibm, or custom. Support for custom is available in v10.0.2.0 or greater.
    Note: The public certificate on the S3 storage provider must be signed by a known certificate authority that is trusted by API Connect. Use of an untrusted authority can cause the following error during backup upload: x509: certificate signed by unknown authority.
    host
    For objstore type backups, specify the S3 endpoint with the corresponding S3 region in the format <S3endpoint>/<S3region>.
    path
    The path to the location of the backup. For objstore backups, this is the name of your S3 bucket in which to store backup data, for example bucket-name/folder. The use of subdirectories in the bucket name is not supported.
    Note: When your deployment includes a management subsystem in two different clusters (with active databases), the two management subsystems cannot use the same S3 bucket name in their database backup configurations. Each management subsystem must use a unique S3 bucket name.

    Ensure that bucket-name/folder is empty. If the folder was previously used for backups, it will not be empty, and the stanza-create job might encounter an error.

    Explanation: Once a folder is used, archive.info and backup.info files are created. During stanza creation, the process compares the database version and database system id in the two info files with those of the current Postgres database cluster. Stanza creation fails if there is a mismatch. To prevent this possibility, remove all files from the folder before configuring backups.

    credentials
    For objstore type backups, the name of the Kubernetes secret containing your S3 access key / key secret.
    backupCerts (v10.0.2.0 or greater)
    Custom certificate. Used only when s3provider is set to custom. Most custom S3 providers are based on a custom CA certificate.

    This field accepts the name of the Kubernetes secret containing your upstream custom S3 CA certificate. The key of the secret must be ca.crt and the value must be the base64-encoded CA certificate.

    The API Connect management subsystem validates the custom S3 certificate against this customer-provided CA certificate.

    Sample bash script to create the Kubernetes secret:
    cat >customs3ca.yaml <<EOF
    apiVersion: v1
    data:
      ca.crt: $(base64 <path-to-ca-certificate> | tr -d '\n' )
    kind: Secret
    metadata:
      name: custom-server-ca
    type: generic
    EOF
    
    kubectl apply -f customs3ca.yaml -n <namespace>
    backups3URIStyle (v10.0.2.0 or greater)
    Valid values: host and path.

    Only allowed if s3provider is set to custom.

    Some custom S3 providers require the URI style to be set to path. For example, MinIO supports both host- and path-style setups; you can create a MinIO S3 server that accepts only path-style client communications.

    Important: Contact your custom S3 administrator before configuring this field. If not properly configured, the upstream custom S3 provider can reject connections from the client, such as the API Connect management subsystem. An example databaseBackup configuration for a custom S3 provider follows this table.

    schedule
    Cron-like schedule for performing automatic backups. The format for the schedule is:

    * * * * *
    - - - - -
    | | | | |
    | | | | +----- day of week (0 - 6) (Sunday=0)
    | | | +------- month (1 - 12)
    | | +--------- day of month (1 - 31)
    | +----------- hour (0 - 23)
    +------------- min (0 - 59)

    The timezone for backups is that of the node on which the postgres-operator pod is scheduled.

    There is no default backup schedule set. Be sure to set your backup schedule; example values follow this table.

    All scheduled Management subsystem backups are of type full only.

    repoRetentionFull (v10.0.3.0 or later)
    The number of full S3 backups to retain. When the next full backup successfully completes, and the specified number of retained backups is reached, the oldest full backup is deleted. All incremental backups and archives associated with the oldest full backup also expire. Incremental backups are not counted for this setting.

    Applies to both manual backups and scheduled backups.

    Minimum value: 1

    Maximum value: 9999999

    Default: none
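
    For reference, the following is a minimal sketch of a databaseBackup subsection for a custom S3 provider such as MinIO. The endpoint, bucket, and secret names are placeholders rather than values from this documentation; confirm the URI style and CA certificate requirements with your S3 administrator:

    databaseBackup:
      protocol: objstore
      s3provider: custom
      host: minio.example.com:9000/custom-region   # placeholder <S3endpoint>/<S3region>
      path: apic-backup
      credentials: mgmt-backup-secret
      backupCerts: custom-server-ca                # secret created by the sample script in Table 1
      backups3URIStyle: path                       # use path if your server accepts only path-style requests
      schedule: "0 3 * * *"
      repoRetentionFull: 2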
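
    The following schedule values illustrate the cron format that is described in Table 1. They are examples only; the timezone is that of the node on which the postgres-operator pod is scheduled:

      schedule: "0 3 * * *"      # every day at 03:00
      schedule: "30 1 * * 0"     # every Sunday at 01:30
      schedule: "0 2 1 * *"      # at 02:00 on the first day of every month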

  3. Verify that the configuration deploys successfully.

    When you configure Management subsystem backups, the operator runs a stanza-create job. This job creates the stanza (Postgres cluster configuration) on the upstream S3 server, which is used for backup and archive procedures. The job also brings up the necessary pod.

    Check the status of the stanza-create job:
    kubectl get jobs -n <namespace> | grep stanza
    • On success:
      • The stanza-create job status is 1/1:
        kubectl get jobs -n <namespace> | grep stanza
        m1-b722e361-postgres-stanza-create          1/1           5s         127m
      • The stanza-create job shows status: "True" and type: Complete:
        kubectl get job -n <namespace> m1-b722e361-postgres-stanza-create -o yaml
        
        status:
          completionTime: "2021-04-13T18:38:50Z"
          conditions:
            - lastProbeTime: "2021-04-13T18:38:50Z"
              lastTransitionTime: "2021-04-13T18:38:50Z"
              status: "True"
              type: Complete
          startTime: "2021-04-13T18:38:45Z"
          succeeded: 1
      • The stanza-create pod is in Completed state:
        kubectl get pods -n <namespace> | grep stanza
        m1-b722e361-postgres-stanza-create-q4fvp                     0/1     Completed   0          127m
        
      • Exec into the pod:
        kubectl -n <namespace> exec -it <backrest-shared-repo-pod> -- bash
      • The pgbackrest command returns the S3 contents in valid JSON:
        pgbackrest info --output json --repo1-type s3
        
        "backup":{"held":false}},"message":"ok"}}][pgbackrest@m1-b722e361-postgres-backrest-shared-repo-69cdfdfdd5-rs882 /
    • On failure, see Troubleshooting stanza-create job for S3 backup configuration.
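
    Instead of repeatedly polling kubectl get jobs, you can optionally wait for the stanza-create job to complete. This sketch relies on standard kubectl behavior; substitute the job name that is reported in your deployment:
    kubectl wait --for=condition=complete job/<stanza-create-job-name> -n <namespace> --timeout=300s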