Creating a Postgres data store on Linux on Power (ppc64le)
Install the Postgres operator and set up the data store.
Before you begin
Make sure that you prepared your online or offline host to pull images from the external repository, and that the correct Helm repo is added. For more information, see Preparing for data store installation.
Installing Postgres online
Install the Postgres operator in an online environment.
Installing Postgres by using the CloudNativePG operator
Complete these steps to install the Postgres data store.
- Create the instana-postgres namespace.

  kubectl create namespace instana-postgres
- Create image pull secrets for the instana-postgres namespace. In the following command, replace <download_key> with your download key.

  kubectl create secret docker-registry instana-registry --namespace instana-postgres \
    --docker-username=_ \
    --docker-password=<download_key> \
    --docker-server=artifact-public.instana.io
- If you are installing Postgres on a Red Hat® OpenShift® cluster, determine the file system group ID from the instana-postgres namespace. Red Hat OpenShift requires that file system groups are within a range of values specific to the namespace.

  kubectl get namespace instana-postgres -o yaml

  An output similar to the following example is shown for the command:

  apiVersion: v1
  kind: Namespace
  metadata:
    annotations:
      .......
      openshift.io/sa.scc.uid-range: 1000750000/10000
    creationTimestamp: "2024-01-14T07:04:59Z"
    labels:
      kubernetes.io/metadata.name: instana-postgres
      .......
    name: instana-postgres

  The openshift.io/sa.scc.uid-range annotation contains the range of allowed IDs. The range 1000750000/10000 indicates 10,000 values starting with ID 1000750000, so it specifies the IDs from 1000750000 through 1000759999. In this example, the value 1000750000 can be used as the user ID (UID) and file system group ID.
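  If you prefer not to read the value from the YAML output manually, you can extract it from the annotation directly. This helper is an optional convenience, not part of the documented procedure:

    # Read the uid-range annotation and keep the part before the slash
    UID_RANGE=$(kubectl get namespace instana-postgres \
      -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}')
    FS_GROUP_ID=${UID_RANGE%/*}
    echo "$FS_GROUP_ID"   # for example, 1000750000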
- Install the Postgres operator. Use the UID from the previous step as the <UID from namespace> value in the following command:

  helm install cnpg instana/cloudnative-pg \
    --set image.repository=artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg \
    --set image.tag=v1.21.1_v0.9.0 \
    --version=0.20.0 \
    --set imagePullSecrets[0].name=instana-registry \
    --set containerSecurityContext.runAsUser=<UID from namespace> \
    --set containerSecurityContext.runAsGroup=<UID from namespace> \
    -n instana-postgres
- Generate a random alphanumeric password. Make a note of the password because you need to store it later in the config.yaml file.

  openssl rand -base64 24 | tr -cd 'a-zA-Z0-9' | head -c32; echo
- Create a file, for example postgres-secret.yaml, with a Secret resource that uses the password that you generated in the previous step.

  kind: Secret
  apiVersion: v1
  metadata:
    name: instanaadmin
  type: Opaque
  stringData:
    username: instanaadmin
    password: <user-generated-password>
- Create the Postgres secret.

  kubectl apply -f postgres-secret.yaml -n instana-postgres
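  Alternatively, you can create an equivalent secret with a single command instead of writing the YAML file. A convenience sketch, assuming that the generated password is stored in the PASSWORD shell variable:

    # Equivalent to applying postgres-secret.yaml; assumes $PASSWORD holds the generated value
    kubectl create secret generic instanaadmin -n instana-postgres \
      --from-literal=username=instanaadmin \
      --from-literal=password="$PASSWORD"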
- Create a YAML file, for example postgres.yaml, with the Postgres cluster configuration.

  apiVersion: postgresql.cnpg.io/v1
  kind: Cluster
  metadata:
    name: postgres
  spec:
    instances: 3
    imageName: artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.11.0
    imagePullPolicy: IfNotPresent
    imagePullSecrets:
      - name: instana-registry
    postgresql:
      parameters:
        shared_buffers: 32MB
        pg_stat_statements.track: all
        auto_explain.log_min_duration: '10s'
      pg_hba:
        - local all all trust
        - host all all 0.0.0.0/0 md5
        - local replication standby trust
        - hostssl replication standby all md5
        - hostnossl all all all reject
        - hostssl all all all md5
    managed:
      roles:
        - name: instanaadmin
          login: true
          superuser: true
          createdb: true
          createrole: true
          passwordSecret:
            name: instanaadmin
    bootstrap:
      initdb:
        database: instanaadmin
        owner: instanaadmin
        secret:
          name: instanaadmin
    superuserSecret:
      name: instanaadmin
    storage:
      size: 1Gi
      # storageClass: "Optional"
- Complete the steps in Deploying and verifying Postgres (online and offline).
Installing Postgres by using the Zalando operator
To deploy the Zalando Postgres operator online, complete the following steps:
- If you are using a Red Hat OpenShift cluster, create Security Context Constraints (SCC) before you deploy the Postgres operator.

  - Create a YAML file, for example postgres-scc.yaml, with the SCC resource definition.

    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: postgres-scc
    runAsUser:
      type: MustRunAs
      uid: 101
    seLinuxContext:
      type: RunAsAny
    fsGroup:
      type: RunAsAny
    allowHostDirVolumePlugin: false
    allowHostNetwork: true
    allowHostPorts: true
    allowPrivilegedContainer: false
    allowHostIPC: true
    allowHostPID: true
    readOnlyRootFilesystem: false
    users:
      - system:serviceaccount:instana-postgres:postgres-operator
      - system:serviceaccount:instana-postgres:postgres-pod
      - system:serviceaccount:instana-postgres:default

  - Create the SCC resource.

    kubectl apply -f postgres-scc.yaml
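  Optionally, verify that the SCC was created:

    oc get scc postgres-scc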
- Install the Postgres operator.

  - Create the instana-postgres namespace.

    kubectl create namespace instana-postgres

  - Create image pull secrets for the instana-postgres namespace. In the following command, replace <download_key> with your download key.

    kubectl create secret docker-registry instana-registry --namespace instana-postgres \
      --docker-username=_ \
      --docker-password=<download_key> \
      --docker-server=artifact-public.instana.io
  - Create a file, for example values.yaml, with the Postgres configuration.

    configGeneral:
      kubernetes_use_configmaps: true
    securityContext:
      runAsUser: 101
    image:
      registry: artifact-public.instana.io
      repository: self-hosted-images/3rd-party/operator/zalando
      tag: v1.10.0_v0.1.0
    imagePullSecrets:
      - name: instana-registry
    configKubernetes:
      pod_service_account_definition: |
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: postgres-pod
        imagePullSecrets:
          - name: instana-registry
    podServiceAccount:
      name: postgres-pod
  - Install the Postgres operator.

    helm install postgres-operator instana/postgres-operator --version=v1.10.1 -f values.yaml -n instana-postgres
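  Optionally, wait for the operator deployment to become available before you continue. A minimal check that assumes the deployment name postgres-operator, which also appears in the verification output later in this topic:

    kubectl wait --for=condition=Available deployment/postgres-operator \
      -n instana-postgres --timeout=180s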
- Create a file, for example postgres.yaml, with the postgresql resource definition.

  apiVersion: "acid.zalan.do/v1"
  kind: postgresql
  metadata:
    name: postgres
  spec:
    patroni:
      pg_hba:
        - local all all trust
        - host all all 0.0.0.0/0 trust
        - local replication standby trust
        - hostssl replication standby all trust
        - hostnossl all all all reject
        - hostssl all all all trust
    dockerImage: artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zalando:15.7_v0.1.0
    teamId: instana
    numberOfInstances: 3
    postgresql:
      version: "15"
      parameters:
        # Expert section
        shared_buffers: "32MB"
    volume:
      size: 10Gi
      # Uncomment the following line to specify your own storage class; otherwise, the default is used.
      # storageClass: <REPLACE>
- Complete the steps in Deploying and verifying Postgres (online and offline).
Installing Postgres offline
Install the Postgres operator in an offline environment.
Installing Postgres offline by using the CloudNativePG operator
If you didn't yet pull the Postgres images from the external registry when you prepared for installation, you can pull them now. Run the following commands on your bastion host. Then, copy the images to your Instana host that is in your air-gapped environment.
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.21.1_v0.9.0
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.11.0
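How you copy the images depends on your environment. One common approach is to export them to an archive with docker save, transfer the archive, and load it on the target host. A sketch; the archive name and the scp destination are illustrative:

  # On the bastion host: bundle both images into one compressed archive
  docker save \
    artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.21.1_v0.9.0 \
    artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.11.0 \
    | gzip > postgres-images.tar.gz

  # Transfer the archive, for example with scp
  scp postgres-images.tar.gz <user>@<instana-host>:/tmp/

  # On the Instana host: load the images into the local Docker cache
  docker load < /tmp/postgres-images.tar.gz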
Complete the following steps on your Instana host.
- Retag the images to your internal image registry.

  docker tag artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.21.1_v0.9.0 <internal-image-registry>/operator/cloudnative-pg:v1.21.1_v0.9.0
  docker tag artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.11.0 <internal-image-registry>/datastore/cnpg-containers:15_v0.11.0
- Push the images to your internal image registry.

  docker push <internal-image-registry>/operator/cloudnative-pg:v1.21.1_v0.9.0
  docker push <internal-image-registry>/datastore/cnpg-containers:15_v0.11.0
- Create the instana-postgres namespace.

  kubectl create namespace instana-postgres
- Optional: Create an image pull secret if your internal image registry needs authentication.

  kubectl create secret docker-registry <secret_name> --namespace instana-postgres \
    --docker-username=<registry_username> \
    --docker-password=<registry_password> \
    --docker-server=<internal-image-registry>:<internal-image-registry-port> \
    --docker-email=<registry_email>
- If you are installing Postgres on a Red Hat® OpenShift® cluster, determine the file system group ID from the instana-postgres namespace. Red Hat OpenShift requires that file system groups are within a range of values specific to the namespace.

  kubectl get namespace instana-postgres -o yaml

  An output similar to the following example is shown for the command:

  apiVersion: v1
  kind: Namespace
  metadata:
    annotations:
      .......
      openshift.io/sa.scc.uid-range: 1000750000/10000
    creationTimestamp: "2024-01-14T07:04:59Z"
    labels:
      kubernetes.io/metadata.name: instana-postgres
      .......
    name: instana-postgres

  The openshift.io/sa.scc.uid-range annotation contains the range of allowed IDs. The range 1000750000/10000 indicates 10,000 values starting with ID 1000750000, so it specifies the IDs from 1000750000 through 1000759999. In this example, the value 1000750000 can be used as the user ID (UID) and file system group ID.
- Install the Postgres operator. If you created an image pull secret for your internal registry, add --set imagePullSecrets[0].name=<secret_name> to the following commands.

  - Red Hat OpenShift cluster

    Use the UID from the previous step as the <UID from namespace> value in the following command:

    helm install cnpg cloudnative-pg-1.20.0.tgz \
      --set image.repository=image-registry.openshift-image-registry.svc:5000/instana-postgres/cloudnative-pg-operator \
      --set image.tag=v1.21.1_v0.9.0 \
      --version=0.20.0 \
      --set containerSecurityContext.runAsUser=<UID from namespace> \
      --set containerSecurityContext.runAsGroup=<UID from namespace> \
      -n instana-postgres

  - Kubernetes cluster

    helm install cnpg cloudnative-pg-operator \
      --set image.repository=<internal-image-registry>/operator/cloudnative-pg \
      --set image.tag=v1.21.1_v0.9.0 \
      --version=0.20.0 \
      -n instana-postgres
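  Optionally, confirm that the operator finished rolling out before you continue. A minimal check, assuming the Helm release name cnpg, which yields a deployment named cnpg-cloudnative-pg as in the verification output later in this topic:

    kubectl rollout status deployment/cnpg-cloudnative-pg -n instana-postgres --timeout=180s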
- Generate a random alphanumeric password. Make a note of the password because you need to store it later in the config.yaml file.

  openssl rand -base64 24 | tr -cd 'a-zA-Z0-9' | head -c32; echo
- Create a file, for example postgres-secret.yaml, with a Secret resource that uses the password that you generated in the previous step.

  kind: Secret
  apiVersion: v1
  metadata:
    name: instanaadmin
  type: Opaque
  stringData:
    username: instanaadmin
    password: <user-generated-password>
- Create the Postgres secret.

  kubectl apply -f postgres-secret.yaml -n instana-postgres
- Create a YAML file, for example postgres.yaml, with the Postgres cluster configuration.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: postgres
spec:
instances: 3
imageName: <internal-image-registry>/datastore/cnpg-containers:15_v0.11.0
imagePullPolicy: IfNotPresent
# Optional: if you created an image pull secret for your internal registry, uncomment the following lines and update the image pull secret information.
# imagePullSecrets:
# - name: <internal-image-registry-pull-secret>
postgresql:
parameters:
shared_buffers: 32MB
pg_stat_statements.track: all
auto_explain.log_min_duration: '10s'
pg_hba:
- local all all trust
- host all all 0.0.0.0/0 md5
- local replication standby trust
- hostssl replication standby all md5
- hostnossl all all all reject
- hostssl all all all md5
managed:
roles:
- name: instanaadmin
login: true
superuser: true
createdb: true
createrole: true
passwordSecret:
name: instanaadmin
bootstrap:
initdb:
database: instanaadmin
owner: instanaadmin
secret:
name: instanaadmin
superuserSecret:
name: instanaadmin
storage:
size: 1Gi
# storageClass: "Optional"
- Complete the steps in Deploying and verifying Postgres (online and offline).
Installing Postgres offline by using the Zalando operator
If you are using a Red Hat OpenShift cluster, create Security Context Constraints (SCC) before you deploy the Postgres operator.

- Create a YAML file, for example postgres-scc.yaml, with the SCC resource definition.

  apiVersion: security.openshift.io/v1
  kind: SecurityContextConstraints
  metadata:
    name: postgres-scc
  runAsUser:
    type: MustRunAs
    uid: 101
  seLinuxContext:
    type: RunAsAny
  fsGroup:
    type: RunAsAny
  allowHostDirVolumePlugin: false
  allowHostNetwork: true
  allowHostPorts: true
  allowPrivilegedContainer: false
  allowHostIPC: true
  allowHostPID: true
  readOnlyRootFilesystem: false
  users:
    - system:serviceaccount:instana-postgres:postgres-operator
    - system:serviceaccount:instana-postgres:postgres-pod
    - system:serviceaccount:instana-postgres:default

- Create the SCC resource.

  kubectl apply -f postgres-scc.yaml
Then, proceed with installing the Postgres operator.
If you didn't yet pull the Postgres images from the external registry when you prepared for installation, you can pull them now. Run the following commands on your bastion host. Then, copy the images to your Instana host that is in your air-gapped environment.
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/zalando:v1.10.0_v0.1.0
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zalando:15.7_v0.1.0
Complete the following steps on your Instana host.
- Retag the images to your internal image registry.

  docker tag artifact-public.instana.io/self-hosted-images/3rd-party/operator/zalando:v1.10.0_v0.1.0 <internal-image-registry>/self-hosted-images/3rd-party/operator/zalando:v1.10.0_v0.1.0
  docker tag artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zalando:15.7_v0.1.0 <internal-image-registry>/self-hosted-images/3rd-party/datastore/zalando:15.7_v0.1.0
- Push the images to your internal image registry.

  docker push <internal-image-registry>/self-hosted-images/3rd-party/operator/zalando:v1.10.0_v0.1.0
  docker push <internal-image-registry>/self-hosted-images/3rd-party/datastore/zalando:15.7_v0.1.0
- Create a file, for example values.yaml, with the Postgres configuration.

  configGeneral:
    kubernetes_use_configmaps: true
  securityContext:
    runAsUser: 101
  image:
    registry: <internal-image-registry>
    repository: self-hosted-images/3rd-party/operator/zalando
    tag: v1.10.0_v0.1.0
  imagePullSecrets:
    - name: instana-registry
  configKubernetes:
    pod_service_account_definition: |
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: postgres-pod
      imagePullSecrets:
        - name: instana-registry
  podServiceAccount:
    name: postgres-pod
- Create the instana-postgres namespace.

  kubectl create namespace instana-postgres
- Optional: Create an image pull secret if your internal image registry needs authentication.

  kubectl create secret docker-registry <secret_name> --namespace instana-postgres \
    --docker-username=<registry_username> \
    --docker-password=<registry_password> \
    --docker-server=<internal-image-registry>:<internal-image-registry-port> \
    --docker-email=<registry_email>
- Install the Postgres operator. If you created an image pull secret in the previous step, add --set imagePullSecrets[0].name="<internal-image-registry-pull-secret>" to the following command.

  helm install postgres-operator postgres-operator-ppc64le-1.10.0.tgz --version=1.10.0 \
    --set configGeneral.kubernetes_use_configmaps=true \
    --set securityContext.runAsUser=101 \
    --namespace=instana-postgres \
    --set image.registry=<internal-image-registry> \
    --set image.repository=ppc64le-oss/postgres-operator-ppc64le \
    --set image.tag=v1.10.0_v0.1.0
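  Optionally, check that the Helm release and the operator deployment are healthy. A sketch that uses standard Helm and kubectl commands; the deployment name postgres-operator matches the example output later in this topic:

    helm status postgres-operator -n instana-postgres
    kubectl get deployment postgres-operator -n instana-postgres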
- Create a file, for example postgres.yaml, with the postgresql resource definition.

  apiVersion: "acid.zalan.do/v1"
  kind: postgresql
  metadata:
    name: postgres
  spec:
    patroni:
      pg_hba:
        - local all all trust
        - host all all 0.0.0.0/0 trust
        - local replication standby trust
        - hostssl replication standby all trust
        - hostnossl all all all reject
        - hostssl all all all trust
    dockerImage: <internal-image-registry>/self-hosted-images/3rd-party/datastore/zalando:15.7_v0.1.0
    teamId: instana
    numberOfInstances: 3
    postgresql:
      version: "15"
      parameters:
        # Expert section
        shared_buffers: "32MB"
    volume:
      size: 10Gi
      # Uncomment the following line to specify your own storage class; otherwise, the default is used.
      # storageClass: <REPLACE>
- Complete the steps in Deploying and verifying Postgres (online and offline).
Deploying and verifying Postgres (online and offline)
Complete these steps to deploy the Postgres instance and create the data store.
- Create the postgresql resource.

  kubectl apply -f postgres.yaml --namespace=instana-postgres
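  The cluster pods start one after another. Optionally, watch them come up (press Ctrl+C to stop watching):

    kubectl get pods -n instana-postgres --watch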
- If you used the CNPG operator, store the password that you generated earlier in the config.yaml file.

  datastoreConfigs:
    ...
    postgresConfigs:
      - user: instanaadmin
        password: <USER_GENERATED_PASSWORD>
        adminUser: instanaadmin
        adminPassword: <USER_GENERATED_PASSWORD>
    ...
- If you used the Zalando operator, complete these steps:

  - Retrieve the auto-generated password of the Zalando user. By default, Zalando creates the postgres user with a randomly generated password.

    kubectl get secret postgres.postgres.credentials.postgresql.acid.zalan.do -n instana-postgres --template='{{index .data.password | base64decode}}' && echo

  - Store the retrieved password in the config.yaml file.

    - Replace <RETRIEVED_FROM_SECRET> with the password that you got in the previous step.
    - Replace <username in the Postgres data store> with the Zalando username.

    datastoreConfigs:
      ...
      postgresConfigs:
        - user: <username in the Postgres data store>
          password: <RETRIEVED_FROM_SECRET>
          adminUser: <username in the Postgres data store>
          adminPassword: <RETRIEVED_FROM_SECRET>
      ...
- Verify the Postgres operator deployment.

  kubectl get all -n instana-postgres

  If Postgres is deployed successfully, the output of the command is similar to the following examples.

  Example output for the Zalando operator:

  NAME                                     READY   STATUS    RESTARTS   AGE
  pod/postgres-0                           1/1     Running   0          100s
  pod/postgres-1                           1/1     Running   0          69s
  pod/postgres-2                           1/1     Running   0          41s
  pod/postgres-operator-766455c58c-ntmpf   1/1     Running   0          11m

  NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
  service/postgres            ClusterIP   192.168.1.107   <none>        5432/TCP   101s
  service/postgres-operator   ClusterIP   192.168.1.35    <none>        8080/TCP   11m
  service/postgres-repl       ClusterIP   192.168.1.72    <none>        5432/TCP   101s

  NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/postgres-operator   1/1     1            1           11m

  NAME                                           DESIRED   CURRENT   READY   AGE
  replicaset.apps/postgres-operator-766455c58c   1         1         1       11m

  NAME                        READY   AGE
  statefulset.apps/postgres   3/3     103s

  NAME                                                    IMAGE                             CLUSTER-LABEL   SERVICE-ACCOUNT   MIN-INSTANCES   AGE
  operatorconfiguration.acid.zalan.do/postgres-operator   ghcr.io/zalando/spilo-15:3.0-p1   cluster-name    postgres-pod      -1              11m

  NAME                                TEAM      VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE    STATUS
  postgresql.acid.zalan.do/postgres   instana   15        3      10Gi     500m          2Gi              106s   Running

  Example output for the CloudNativePG operator:

  NAME                                       READY   STATUS    RESTARTS   AGE
  pod/postgres-1                             1/1     Running   0          100s
  pod/postgres-2                             1/1     Running   0          69s
  pod/postgres-3                             1/1     Running   0          41s
  pod/cnpg-cloudnative-pg-64bbc87958-fqnrl   1/1     Running   0          11m

  NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
  service/cnpg-webhook-service   ClusterIP   172.30.66.183    <none>        443/TCP
  service/postgres-r             ClusterIP   172.30.163.146   <none>        5432/TCP
  service/postgres-ro            ClusterIP   172.30.226.75    <none>        5432/TCP
  service/postgres-rw            ClusterIP   172.30.235.178   <none>        5432/TCP

  NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/cnpg-cloudnative-pg   1/1     1            1           11m

  NAME                                             DESIRED   CURRENT   READY   AGE
  replicaset.apps/cnpg-cloudnative-pg-64bbc87958   1         1         1       11m
Migrating data from Zalando to CloudNativePG
By using the pg_basebackup bootstrap mode, you can create a new PostgreSQL cluster (target) that replicates the exact physical state of an existing PostgreSQL instance (source).
You can bootstrap from a live cluster through a valid streaming replication connection, and by using the source PostgreSQL instance either as a primary or a standby PostgreSQL server.
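For orientation, the following is a minimal sketch of a target Cluster manifest that bootstraps from a source cluster over streaming replication. All names in it (the external cluster name, host, replication user, and secret) are illustrative assumptions, not values from this procedure; follow the linked topic for the supported migration steps:

  apiVersion: postgresql.cnpg.io/v1
  kind: Cluster
  metadata:
    name: postgres-cnpg
    namespace: instana-postgres
  spec:
    instances: 3
    bootstrap:
      # Clone the physical state of the source instance
      pg_basebackup:
        source: zalando-source
    externalClusters:
      # Connection details for the existing (source) cluster
      - name: zalando-source
        connectionParameters:
          host: postgres.instana-postgres.svc   # source cluster service (illustrative)
          user: standby                         # replication user (illustrative)
        password:
          name: standby-credentials             # secret with the replication password (illustrative)
          key: password
    storage:
      size: 10Gi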
To migrate the data from a Zalando PostgreSQL cluster to a CloudNativePG replica cluster through the pg_basebackup bootstrap mode, see Migrating data from Zalando to CloudNativePG.