Back up and restore Postgres data.
Before you begin
Before you back up and restore Postgres data, you need a functioning Amazon Simple Storage Service (S3) object storage system in place.
- For demonstrations or test clusters, you can use the object storage that is provided by Rook Ceph®. For more information, see Create a Local Object Store.
- For production clusters, contact Red Hat® OpenShift® and Kubernetes
subject matter experts within your organization to determine an appropriate S3 object storage
solution.
If you use EnterpriseDB (EDB) Postgres, before you use the backup and restore features, delete and re-create one of the custom resource definitions (CRDs) for EDB.
- Log in to your cluster as an admin user and enter the following command:
oc delete crd backups.postgresql.k8s.enterprisedb.io
- Download this txt file and save it as a YAML file with the name backups.crd.yaml.
- Run the following command:
oc create -f backups.crd.yaml
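If you want to confirm that the CRD was re-created before you continue, you can optionally check for it with the following command:
oc get crd backups.postgresql.k8s.enterprisedb.io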
Procedure
- Create or identify an S3 bucket credentials secret.
- Decide on a key name for each S3 credential value. In a following step, these keys are put into the S3 credential secret that IBM® Netcool® Operations Insight® then provides to EDB Postgres, along with the actual values that you provide.
- Create a key with a value that corresponds to your S3 access key ID, for example, a key named AWS_ACCESS_KEY_ID.
- Create a key with a value that corresponds to your S3 secret access key, for example, a key named AWS_SECRET_ACCESS_KEY.
- Create a key for your S3 access session token, for example, a key named AWS_ACCESS_SESSION_TOKEN.
- Base64 encode each of the credential values to find the data values for the secret. For example, if your S3 access key ID is johnsmith, obtain the Base64 encoded value by running the following command:
echo "johnsmith" | base64
Running this command returns the encoded value, am9obnNtaXRoCg==. Complete this step for each of your S3 credential values.
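If you want to double-check an encoded value before you put it in the secret, you can decode it again. The following example, assuming a Linux shell with GNU coreutils, decodes the sample access key ID value and prints johnsmith:
echo "am9obnNtaXRoCg==" | base64 -d
Note that echo includes a trailing newline in the encoded value, which matches the sample values in this topic. If you prefer to encode a value without the trailing newline, use echo -n instead.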
- Create a file that is named s3-credentials.yaml and include the following data.
- Add the Base64 encoded S3 credential data from the previous step. Use the data that is particular to your S3 bucket.
- The following example doesn't require an S3 access session token to use the S3 bucket.
- Provide the namespace name of your Netcool Operations Insight custom resource. The following example uses a netcool namespace.
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: am9obnNtaXRoCg==
  AWS_SECRET_ACCESS_KEY: VHVya2V5SGFtUm9hc3RCZWVmU2hhcnBDaGVkZGFyU291cmRvdWdoQnJlYWQxMjM0Cg==
kind: Secret
metadata:
  name: s3-credentials
  namespace: netcool
- Create the secret with the following command:
oc create -f s3-credentials.yaml
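To verify that the secret was created with the expected keys, you can optionally inspect it and decode one of its values. These example commands assume the netcool namespace and key names from the previous example:
oc get secret s3-credentials -n netcool
oc get secret s3-credentials -n netcool -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
Alternatively, if you prefer not to Base64 encode the values yourself, oc can create an equivalent secret directly from literal values, for example with oc create secret generic s3-credentials -n netcool --from-literal=AWS_ACCESS_KEY_ID=<value> --from-literal=AWS_SECRET_ACCESS_KEY=<value>.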
- After your S3 credential secret is created, keep a record of the name of the secret and the name of each key. You provide these values in the CRD in this txt file.
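For orientation only, the following illustration shows how an S3 credentials secret is commonly referenced by secret name and key names in EDB Postgres backup settings (a barmanObjectStore section). The exact fields and their location are defined by the CRD in the linked txt file, and the destinationPath value here is only a placeholder:
backup:
  barmanObjectStore:
    destinationPath: "s3://your-bucket/postgres-backups"
    s3Credentials:
      accessKeyId:
        name: s3-credentials
        key: AWS_ACCESS_KEY_ID
      secretAccessKey:
        name: s3-credentials
        key: AWS_SECRET_ACCESS_KEY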