Creating a Postgres data store on Linux x86_64
Install the Postgres operator and set up the data store.
- Before you begin
- Postgres Operator versions and image tags for deployment
- Installing Postgres online
- Installing Postgres offline
- Deploying and verifying Postgres (online and offline)
- Migrating data from Zalando to CloudNativePG
Before you begin
Make sure that you have prepared your online or offline host to pull images from the external repository, and that the correct Helm repo is added.
For more information, see Preparing for data store installation.
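For example, you can confirm repository and registry access on an online host with commands similar to the following sketch. The Helm repository URL is a placeholder here; use the repository name and URL from Preparing for data store installation.
# Sketch: verify access to the Helm repository and the Instana image registry.
# <instana-helm-repo-url> and <download_key> are placeholders; use the values from
# Preparing for data store installation.
helm repo add instana <instana-helm-repo-url>
helm repo update
helm search repo instana/cloudnative-pg
docker login artifact-public.instana.io -u _ -p <download_key>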
Installing Postgres online
Install the Postgres Operator in an online environment.
Installing Postgres by using the CloudNativePG operator
Complete these steps to install the Postgres data store.
- Create the instana-postgres namespace.
kubectl create namespace instana-postgres
- Create image pull secrets for the instana-postgres namespace.
kubectl create secret docker-registry instana-registry --namespace instana-postgres \
  --docker-username=_ \
  --docker-password=<download_key> \
  --docker-server=artifact-public.instana.io
- If you are installing Postgres on a Red Hat® OpenShift® cluster, determine the file system group ID from the instana-postgres namespace. Red Hat OpenShift requires that file system groups are within a range of values specific to the namespace.
kubectl get namespace instana-postgres -o yaml
An output similar to the following example is shown for the command:
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    .......
    openshift.io/sa.scc.uid-range: 1000750000/10000
  creationTimestamp: "2024-01-14T07:04:59Z"
  labels:
    kubernetes.io/metadata.name: instana-postgres
  .......
  name: instana-postgres
The openshift.io/sa.scc.uid-range annotation contains the range of allowed IDs. The range 1000750000/10000 indicates 10,000 values starting with ID 1000750000, so it specifies the range of IDs from 1000750000 to 1000759999. In this example, the value 1000750000 might be used as the file system group ID (UID).
- Install the Postgres Operator. Use the UID from the previous step as the <UID from namespace> in the following command:
helm install cnpg instana/cloudnative-pg \
  --set image.repository=artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg \
  --set image.tag=v1.21.1_v0.6.0 \
  --version=0.20.0 \
  --set imagePullSecrets[0].name=instana-registry \
  --set containerSecurityContext.runAsUser=<UID from namespace> \
  --set containerSecurityContext.runAsGroup=<UID from namespace> \
  -n instana-postgres
- Generate a password in base64. Make a note of the password. You need to store it later in the config.yaml file.
openssl rand -base64 24 | tr -cd 'a-zA-Z0-9' | head -c32; echo
- Create a resource of type Secret by using the password that you generated in the previous step. Save the resource in a file, for example postgres-secret.yaml.
kind: Secret
apiVersion: v1
metadata:
  name: instanaadmin
type: Opaque
stringData:
  username: instanaadmin
  password: <user-generated-password>
- Create the Postgres secret.
kubectl apply -f postgres-secret.yaml -n instana-postgres
- Create a YAML file, for example postgres.yaml, with the Postgres cluster configuration.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres
spec:
  instances: 3
  imageName: artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.8.0
  imagePullPolicy: IfNotPresent
  imagePullSecrets:
    - name: instana-registry
  postgresql:
    parameters:
      shared_buffers: 32MB
      pg_stat_statements.track: all
      auto_explain.log_min_duration: '10s'
    pg_hba:
      - local all all trust
      - host all all 0.0.0.0/0 md5
      - local replication standby trust
      - hostssl replication standby all md5
      - hostnossl all all all reject
      - hostssl all all all md5
  managed:
    roles:
      - name: instanaadmin
        login: true
        superuser: true
        createdb: true
        createrole: true
        passwordSecret:
          name: instanaadmin
  bootstrap:
    initdb:
      database: instanaadmin
      owner: instanaadmin
      secret:
        name: instanaadmin
  superuserSecret:
    name: instanaadmin
  storage:
    size: 1Gi
    # storageClass: "Optional"
- Deploy the Postgres cluster.
kubectl apply -f postgres.yaml -n instana-postgres
- Store the password that you generated earlier in the config.yaml file. (To confirm the value that is stored in the cluster, see the optional check after these steps.)
datastoreConfigs:
  ...
  postgresConfigs:
    - user: instanaadmin
      password: <USER_GENERATED_PASSWORD>
      adminUser: instanaadmin
      adminPassword: <USER_GENERATED_PASSWORD>
  ...
- Complete the steps in Deploying and verifying Postgres (online and offline).
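Optional: if you want to confirm that the password in the instanaadmin secret matches the value in your config.yaml file, you can decode it from the cluster. This is only a sketch of one possible check; the secret name instanaadmin comes from the earlier steps.
kubectl get secret instanaadmin -n instana-postgres -o jsonpath='{.data.password}' | base64 -d; echo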
Installing Postgres offline
Install the Postgres Operator in an offline environment.
Installing Postgres offline by using the CloudNativePG operator
If you didn't yet pull the Postgres images from the external registry when you prepared for installation, you can pull them now. Run the following commands on your bastion host. Then, copy the images to your Instana host that is in your air-gapped environment.
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.21.1_v0.6.0
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.8.0
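If the bastion host cannot push directly to a registry that is reachable from the air-gapped environment, one way to copy the images is with docker save and docker load. This is only a sketch; the archive name postgres-images.tar is an example.
# On the bastion host, export the pulled images to an archive.
docker save -o postgres-images.tar \
  artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.21.1_v0.6.0 \
  artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.8.0
# Copy postgres-images.tar to the Instana host, then import the images there.
docker load -i postgres-images.tar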
Complete the following steps on your Instana host.
- Retag the images to your internal image registry.
docker tag artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.21.1_v0.6.0 <internal-image-registry>/operator/cloudnative-pg:v1.21.1_v0.6.0
docker tag artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.8.0 <internal-image-registry>/datastore/cnpg-containers:15_v0.8.0
- Push the images to your internal image registry on your bastion host.
docker push <internal-image-registry>/operator/cloudnative-pg:v1.21.1_v0.6.0
docker push <internal-image-registry>/datastore/cnpg-containers:15_v0.8.0
- Create the instana-postgres namespace.
kubectl create namespace instana-postgres
- Optional: Create an image pull secret if your internal image registry needs authentication.
kubectl create secret docker-registry <secret_name> --namespace instana-postgres \
  --docker-username=<registry_username> \
  --docker-password=<registry_password> \
  --docker-server=<internal-image-registry>:<internal-image-registry-port> \
  --docker-email=<registry_email>
- If you are installing Postgres on a Red Hat® OpenShift® cluster, determine the file system group ID from the instana-postgres namespace. Red Hat OpenShift requires that file system groups are within a range of values specific to the namespace.
kubectl get namespace instana-postgres -o yaml
An output similar to the following example is shown for the command:
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    .......
    openshift.io/sa.scc.uid-range: 1000750000/10000
  creationTimestamp: "2024-01-14T07:04:59Z"
  labels:
    kubernetes.io/metadata.name: instana-postgres
  .......
  name: instana-postgres
The openshift.io/sa.scc.uid-range annotation contains the range of allowed IDs. The range 1000750000/10000 indicates 10,000 values starting with ID 1000750000, so it specifies the range of IDs from 1000750000 to 1000759999. In this example, the value 1000750000 might be used as the file system group ID (UID).
- Install the Postgres Operator. If you created an image pull secret for your internal registry, add --set imagePullSecrets[0].name=<secret_name> to the following command.
  - Red Hat OpenShift cluster
    Use the UID from the previous step as the <UID from namespace> in the following command:
    helm install cnpg cloudnative-pg-1.21.0.tgz \
      --set image.repository=image-registry.openshift-image-registry.svc:5000/instana-postgres/cloudnative-pg-operator \
      --set image.tag=v1.21.1_v0.6.0 \
      --version=0.20.0 \
      --set containerSecurityContext.runAsUser=<UID from namespace> \
      --set containerSecurityContext.runAsGroup=<UID from namespace> \
      -n instana-postgres
  - Kubernetes cluster
    helm install cnpg cloudnative-pg-operator \
      --set image.repository=<internal-image-registry>/operator/cloudnative-pg \
      --set image.tag=v1.21.1_v0.6.0 \
      --version=0.20.0 \
      -n instana-postgres
- Generate a password in base64. Make a note of the password. You need to store it later in the config.yaml file.
openssl rand -base64 24 | tr -cd 'a-zA-Z0-9' | head -c32; echo
- Create a resource of type Secret by using the password that you generated in the previous step. Save the resource in a file, for example postgres-secret.yaml.
kind: Secret
apiVersion: v1
metadata:
  name: instanaadmin
type: Opaque
stringData:
  username: instanaadmin
  password: <user-generated-password>
- Create the Postgres secret.
kubectl apply -f postgres-secret.yaml -n instana-postgres
- Create a YAML file, for example postgres.yaml, with the Postgres cluster configuration. The imageName must point to your internal image registry. (An optional check after this procedure shows how to confirm that the running pods use it.)
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: postgres
spec:
instances: 3
imageName: <internal-image-registry>/datastore/cnpg-containers:15_v0.8.0
imagePullPolicy: IfNotPresent
# Optional: if you created an image pull secret for your internal registry, uncomment the following lines and update the image pull secret information.
# imagePullSecrets:
# - name: <internal-image-registry-pull-secret>
postgresql:
parameters:
shared_buffers: 32MB
pg_stat_statements.track: all
auto_explain.log_min_duration: '10s'
pg_hba:
- local all all trust
- host all all 0.0.0.0/0 md5
- local replication standby trust
- hostssl replication standby all md5
- hostnossl all all all reject
- hostssl all all all md5
managed:
roles:
- name: instanaadmin
login: true
superuser: true
createdb: true
createrole: true
passwordSecret:
name: instanaadmin
bootstrap:
initdb:
database: instanaadmin
owner: instanaadmin
secret:
name: instanaadmin
superuserSecret:
name: instanaadmin
storage:
size: 1Gi
# storageClass: "Optional"
- Deploy the Postgres cluster.
kubectl apply -f postgres.yaml -n instana-postgres
- Store the password that you generated earlier in the config.yaml file.
datastoreConfigs:
  ...
  postgresConfigs:
    - user: instanaadmin
      password: <USER_GENERATED_PASSWORD>
      adminUser: instanaadmin
      adminPassword: <USER_GENERATED_PASSWORD>
  ...
- Complete the steps in Deploying and verifying Postgres (online and offline).
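Optional: after the Postgres pods are running, you can confirm that they reference images from your internal registry rather than from artifact-public.instana.io. The following check is only a sketch.
kubectl get pods -n instana-postgres \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'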
Deploying and verifying Postgres (online and offline)
Complete these steps to deploy the Postgres instance and create the data store.
- Create the postgresql resource.
kubectl apply -f postgres.yaml --namespace=instana-postgres
- Verify the Postgres Operator deployment. (Additional optional checks follow the example output.)
kubectl get all -n instana-postgres
If the Postgres Operator is deployed successfully, an output similar to the following example is shown:
NAME                                       READY   STATUS    RESTARTS   AGE
pod/postgres-1                             1/1     Running   0          100s
pod/postgres-2                             1/1     Running   0          69s
pod/postgres-3                             1/1     Running   0          41s
pod/cnpg-cloudnative-pg-64bbc87958-fqnrl   1/1     Running   0          11m

NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT
service/cnpg-webhook-service   ClusterIP   172.30.66.183    <none>        443/TCP
service/postgres-r             ClusterIP   172.30.163.146   <none>        5432/TCP
service/postgres-ro            ClusterIP   172.30.226.75    <none>        5432/TCP
service/postgres-rw            ClusterIP   172.30.235.178   <none>        5432/TCP

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cnpg-cloudnative-pg   1/1     1            1           11m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/cnpg-cloudnative-pg-64bbc87958   1         1         1       11m
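You can also check the CloudNativePG Cluster resource itself and confirm that the instanaadmin role can log in. These optional checks are only a sketch; the resource, pod, and user names come from the earlier steps.
# Expect the STATUS column to report a healthy cluster.
kubectl get clusters.postgresql.cnpg.io postgres -n instana-postgres
# Run a test query as the instanaadmin role on one of the instances.
kubectl exec -it postgres-1 -n instana-postgres -- psql -U instanaadmin -c 'SELECT version();'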
Migrating data from Zalando to CloudNativePG
By using the pg_basebackup bootstrap mode, you can create a new PostgreSQL cluster (target) that replicates the exact physical state of an existing PostgreSQL instance (source).
You can bootstrap from a live cluster through a valid streaming replication connection, and by using the source PostgreSQL instance either as a primary or a standby PostgreSQL server.
To migrate data from a Zalando PostgreSQL cluster to a CloudNativePG replica cluster through the pg_basebackup bootstrap mode, see Migrating data from Zalando to CloudNativePG.
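As a rough illustration of this bootstrap mode, the target Cluster resource references the source instance through an externalClusters entry. The host, user, and secret names in this sketch are placeholders, and the complete procedure, including the replication credentials that must exist on the Zalando side, is described in Migrating data from Zalando to CloudNativePG.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres-target
spec:
  instances: 3
  bootstrap:
    pg_basebackup:
      source: zalando-source
  externalClusters:
    - name: zalando-source
      connectionParameters:
        # Placeholders: point these at the existing Zalando PostgreSQL service and replication user.
        host: <zalando-cluster-host>
        user: <replication-user>
        dbname: postgres
      password:
        name: <source-credentials-secret>
        key: password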