Back up and restore Postgres data
Before you begin
Before you back up and restore Postgres data, you need a functioning Amazon Simple Storage
Service (S3) Object Storage system in place.
- For demonstrations or test clusters, you can use the object storage that is provided by Rook Ceph®. For more information, see Create a Local Object Store.
- For production clusters, contact Red Hat® OpenShift® and Kubernetes
subject matter experts within your organization to determine an appropriate S3 object storage
solution.
If you use EnterpriseDB (EDB) Postgres, before you use backup and restore features, delete and
re-create one of the custom resource definitions (CRDs) for EDB.
- Log in to your cluster as an admin user and enter the following command:
oc delete crd backups.postgresql.k8s.enterprisedb.io
- Download this txt file and save it as a YAML file with the name backups.crd.yaml.
- Run the following command:
oc create -f backups.crd.yaml
Procedure
The backup and restore procedure comprises the following steps:
- Creating the secret. This involves identifying the S3 credentials to use, encoding them, and then using that information to create a secret in Netcool® Operations Insight®.
- Editing the Netcool Operations Insight CRD to include the required backup destination information. This causes the Netcool Operations Insight operator to create the backup resource.
- Create the secret by completing the following steps:
- Obtain the values for your S3 access key ID, S3 secret access key, and S3 access session token
for your S3 object storage solution.
- Base64 encode each of the credential values.
For example, if your S3 access key ID is johnsmith, obtain the Base64 encoded value by running the following command:
echo -n "johnsmith" | base64
Running this command returns the encoded value am9obnNtaXRo. The -n option prevents the trailing newline that echo normally adds from being encoded into the credential.
Complete this step for the access key ID, secret access key, and access session token that you obtained in Step 1a.
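The encoding step can be sketched as a short script that handles all three credentials at once. The values shown are placeholders, not working keys, and the session token is optional:

```shell
# Placeholder S3 credentials -- substitute your own values.
ACCESS_KEY_ID="johnsmith"
SECRET_ACCESS_KEY="example-secret-key"
SESSION_TOKEN=""   # leave empty if your bucket does not require one

# Use -n so that echo's trailing newline is not encoded into the secret.
echo -n "$ACCESS_KEY_ID" | base64       # am9obnNtaXRo
echo -n "$SECRET_ACCESS_KEY" | base64
if [ -n "$SESSION_TOKEN" ]; then
  echo -n "$SESSION_TOKEN" | base64
fi
```

Paste each encoded value into the corresponding data field of the secret that you create in the next step.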
- Create a file named s3-credentials.yaml and include the Base64 encoded S3 credential data from Step 1b. Use the data that is particular to your S3 bucket.
In the following example, the S3 bucket does not require an access session token. Provide the namespace of your Netcool Operations Insight custom resource; this example uses the noi-on-ocp namespace.
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: QUtJQVFDNFVISTY0VTRDU0RCRlUK
  AWS_SECRET_ACCESS_KEY: cU1ZQzVBQWdPcS92RS90VmJFUUJhVGRtU2lnVkZpN0IzMTBPOW83cgo=
kind: Secret
metadata:
  name: s3-credentials
  namespace: noi-on-ocp
- Create the secret by running the following
command:
oc create -f s3-credentials.yaml
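Before you rely on the secret, you can sanity-check that the encoded values decode back to the credentials you expect. For example, decoding the example AWS_ACCESS_KEY_ID value from the s3-credentials.yaml file above recovers the original key ID (this particular example value was encoded with a trailing newline):

```shell
# Decode the example access key ID from s3-credentials.yaml.
echo 'QUtJQVFDNFVISTY0VTRDU0RCRlUK' | base64 -d   # AKIAQC4UHI64U4CSDBFU
```

If a decoded value does not match the credential you intended, re-encode it and re-create the secret before you continue.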
- Edit the Netcool Operations Insight CRD to include the required backup destination information.
You can do this by using the OCP console UI or the command line. Update or add the following parameters with the values for your system:
postgresql:
  backups:
    data:
      compression: gzip
      encryption: default
      jobs: 1
    endpointCA: {}
    serverName: noi-on-ocp-backup
    onetimeBackup:
      enabled: true
    retentionPolicy: 12m
    endpointURL: 'https://s3.eu-north-1.amazonaws.com/noi-on-ocp/TEST1/32252'
    destinationPath: 's3://noi-on-ocp/TEST1'
    s3Credentials:
      keyNameAccessKeyID: AWS_ACCESS_KEY_ID
      keyNameAccessSecretKey: AWS_SECRET_ACCESS_KEY
      secretName: s3-credentials
    scheduledBackup:
      backupOwnerReference: none
      immediate: true
      schedule: '0 0 0 * * *'
  bootstrap:
    s3Credentials:
      keyNameAccessKeyID: AWS_ACCESS_KEY_ID
      keyNameAccessSecretKey: AWS_SECRET_ACCESS_KEY
      secretName: s3-credentials
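For the command-line route, the same parameters can be applied as a merge patch instead of an interactive edit. This is a hedged sketch only: the custom resource name (evtmanager), its resource type (noi), and the assumption that the postgresql block sits directly under spec are illustrative, so check your own custom resource for the exact names and paths before applying anything.

```shell
# Hypothetical merge patch for the NOI custom resource; the 'spec.postgresql'
# path and the resource name are assumptions -- verify them in your own CR.
cat > noi-backup-patch.json <<'EOF'
{
  "spec": {
    "postgresql": {
      "backups": {
        "data": {"compression": "gzip", "encryption": "default", "jobs": 1},
        "endpointURL": "https://s3.eu-north-1.amazonaws.com/noi-on-ocp/TEST1/32252",
        "destinationPath": "s3://noi-on-ocp/TEST1",
        "s3Credentials": {
          "keyNameAccessKeyID": "AWS_ACCESS_KEY_ID",
          "keyNameAccessSecretKey": "AWS_SECRET_ACCESS_KEY",
          "secretName": "s3-credentials"
        }
      }
    }
  }
}
EOF

# Confirm the patch is well-formed JSON before applying it.
python3 -m json.tool noi-backup-patch.json > /dev/null && echo "patch OK"

# Apply against a live cluster (requires oc login):
# oc patch noi evtmanager -n noi-on-ocp --type merge -p "$(cat noi-backup-patch.json)"
```

Validating the file locally first means a stray comma or quote fails fast on your workstation rather than producing a confusing rejection from the API server.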