Creating a Postgres data store on Linux on IBM Z and LinuxONE
Install the Postgres operator and set up the data store.
- Before you begin
- Postgres Operator versions and image tags for deployment
- Installing Postgres online
- Installing Postgres offline
- Deploying and verifying Postgres (online and offline)
Before you begin
Make sure that you prepared your online or offline host to pull images from the external repository. Also, ensure that the correct Helm repo is added.
For more information, see Preparing for data store installation.
Installing Postgres online
Install the Zalando Postgres Operator in an online environment.
- If you are using a Red Hat OpenShift cluster, create Security Context Constraints (SCC) before you deploy the Postgres Operator.
  - Create a YAML file, for example `postgres-scc.yaml`, with the SCC resource definition.

    ```yaml
    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: postgres-scc
    runAsUser:
      type: MustRunAs
      uid: 101
    seLinuxContext:
      type: RunAsAny
    fsGroup:
      type: RunAsAny
    allowHostDirVolumePlugin: false
    allowHostNetwork: true
    allowHostPorts: true
    allowPrivilegedContainer: false
    allowHostIPC: true
    allowHostPID: true
    readOnlyRootFilesystem: false
    users:
      - system:serviceaccount:instana-postgres:postgres-operator
      - system:serviceaccount:instana-postgres:postgres-pod
      - system:serviceaccount:instana-postgres:default
    ```

  - Create the SCC resource.

    ```sh
    kubectl apply -f postgres-scc.yaml
    ```
- Install the Postgres Operator.
  - Create the `instana-postgres` namespace.

    ```sh
    kubectl create namespace instana-postgres
    ```

  - Create an image pull secret for the `instana-postgres` namespace. In the command, replace the `<download_key>` value with your own download key.

    ```sh
    kubectl create secret docker-registry instana-registry --namespace instana-postgres \
      --docker-username=_ \
      --docker-password=<download_key> \
      --docker-server=artifact-public.instana.io
    ```
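For context on what this secret holds: `kubectl create secret docker-registry` stores a `.dockerconfigjson` whose `auth` field is the base64-encoded `username:password` pair. A minimal local sketch of that encoding (the `my-download-key` value is a placeholder, not a real key):

```shell
# Reproduce the auth entry that `kubectl create secret docker-registry` stores:
# base64 of "<username>:<password>". The key below is a placeholder.
auth=$(printf '%s:%s' '_' 'my-download-key' | base64)
printf '{"auths":{"artifact-public.instana.io":{"auth":"%s"}}}\n' "$auth"
```

Inspecting the real secret with `kubectl get secret instana-registry -n instana-postgres -o yaml` shows the same structure, base64-encoded once more as secret data.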
  - Create a file, for example `values.yaml`, with the Postgres configuration.

    ```yaml
    configGeneral:
      kubernetes_use_configmaps: true
    securityContext:
      runAsUser: 101
    image:
      registry: artifact-public.instana.io
      repository: self-hosted-images/z/ds-operator-images/postgres/postgres-operator
      tag: v1.10.0_v0.1.0
    imagePullSecrets:
      - name: instana-registry
    configKubernetes:
      pod_service_account_definition: |
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: postgres-pod
        imagePullSecrets:
          - name: instana-registry
    podServiceAccount:
      name: postgres-pod
    ```
  - Install the Postgres Operator.

    ```sh
    helm install postgres-operator instana/postgres-operator --version=v1.10.1 -f values.yaml -n instana-postgres
    ```
- Create a file, for example `postgres.yaml`, with the `postgresql` resource definition.

  ```yaml
  apiVersion: "acid.zalan.do/v1"
  kind: postgresql
  metadata:
    name: postgres
  spec:
    patroni:
      pg_hba:
        - local all all trust
        - host all all 0.0.0.0/0 trust
        - local replication standby trust
        - hostssl replication standby all trust
        - hostnossl all all all reject
        - hostssl all all all trust
    dockerImage: artifact-public.instana.io/self-hosted-images/z/ds-operator-images/postgres/spilo:3.0-p1_v0.1.0
    teamId: instana
    numberOfInstances: 3
    spiloRunAsUser: 101
    spiloFSGroup: 103
    spiloRunAsGroup: 103
    postgresql:
      version: "15"
      parameters: # Expert section
        shared_buffers: "32MB"
    volume:
      size: 10Gi
      # Uncomment the following line to specify your own storage class; otherwise, the default storage class is used.
      # storageClass: <REPLACE>
    resources:
      requests:
        cpu: 500m
        memory: 2Gi
      limits:
        cpu: 1000m
        memory: 4Gi
  ```
- Create the `postgresql` resource. By default, Zalando creates the `postgres` user with a randomly generated password.

  ```sh
  kubectl apply -f postgres.yaml --namespace=instana-postgres
  ```
- Retrieve the auto-generated password of the Zalando `postgres` user.

  ```sh
  kubectl get secret postgres.postgres.credentials.postgresql.acid.zalan.do -n instana-postgres --template='{{index .data.password | base64decode}}' && echo
  ```
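The `--template` above decodes the secret in one step. If you want to see what is happening, Kubernetes secrets hold base64-encoded data, and the decoding can also be done by hand. A local sketch with a stand-in value (not a real secret):

```shell
# Secrets store values base64-encoded; `base64decode` in the template above
# is equivalent to piping the raw field through `base64 -d`.
encoded='cGFzc3dvcmQtZnJvbS16YWxhbmRv'   # stand-in for .data.password
password=$(printf '%s' "$encoded" | base64 -d)
echo "$password"
```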
- Store the retrieved password in the `config.yaml` file.
  - Replace the `<RETRIEVED_FROM_SECRET>` value with the password that you retrieved in the previous step.
  - Replace `<username in the Postgres data store>` with the Zalando username.

  ```yaml
  datastoreConfigs:
    ...
    postgresConfigs:
      - user: <username in the Postgres data store>
        password: <RETRIEVED_FROM_SECRET>
        adminUser: <username in the Postgres data store>
        adminPassword: <RETRIEVED_FROM_SECRET>
    ...
  ```
- Verify the Postgres Operator deployment.

  ```sh
  kubectl get all -n instana-postgres
  ```

  If the Postgres Operator is deployed successfully, the output is similar to the following example:

  ```
  NAME                                     READY   STATUS    RESTARTS   AGE
  pod/postgres-0                           1/1     Running   0          100s
  pod/postgres-1                           1/1     Running   0          69s
  pod/postgres-2                           1/1     Running   0          41s
  pod/postgres-operator-766455c58c-ntmpf   1/1     Running   0          11m

  NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
  service/postgres            ClusterIP   192.168.1.107   <none>        5432/TCP   101s
  service/postgres-operator   ClusterIP   192.168.1.35    <none>        8080/TCP   11m
  service/postgres-repl       ClusterIP   192.168.1.72    <none>        5432/TCP   101s

  NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/postgres-operator   1/1     1            1           11m

  NAME                                           DESIRED   CURRENT   READY   AGE
  replicaset.apps/postgres-operator-766455c58c   1         1         1       11m

  NAME                        READY   AGE
  statefulset.apps/postgres   3/3     103s

  NAME                                                    IMAGE                             CLUSTER-LABEL   SERVICE-ACCOUNT   MIN-INSTANCES   AGE
  operatorconfiguration.acid.zalan.do/postgres-operator   ghcr.io/zalando/spilo-15:3.0-p1   cluster-name    postgres-pod      -1              11m

  NAME                                TEAM      VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE    STATUS
  postgresql.acid.zalan.do/postgres   instana   15        3      10Gi     500m          2Gi              106s   Running
  ```
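If you script the verification instead of eyeballing `kubectl get all`, a small helper can assert that all three `postgres-N` pods are `Running`. This is a sketch, not part of Instana tooling; the function only parses text, so the live `kubectl` call is left as a comment:

```shell
# Count pods named postgres-<N> whose STATUS column reads Running.
# Feed it the output of: kubectl get pods -n instana-postgres --no-headers
count_running_postgres() {
  awk '$1 ~ /^postgres-[0-9]+$/ && $3 == "Running"' | wc -l
}

# On a live cluster (not run here), expect 3 for numberOfInstances: 3
#   kubectl get pods -n instana-postgres --no-headers | count_running_postgres
```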
Installing Postgres offline
Install the Postgres Operator in an offline environment.
If you are using a Red Hat OpenShift cluster, create Security Context Constraints before you deploy the Postgres Operator.
- Create a YAML file, for example `postgres-scc.yaml`, with the SCC resource definition.

  ```yaml
  apiVersion: security.openshift.io/v1
  kind: SecurityContextConstraints
  metadata:
    name: postgres-scc
  runAsUser:
    type: MustRunAs
    uid: 101
  seLinuxContext:
    type: RunAsAny
  fsGroup:
    type: RunAsAny
  allowHostDirVolumePlugin: false
  allowHostNetwork: true
  allowHostPorts: true
  allowPrivilegedContainer: false
  allowHostIPC: true
  allowHostPID: true
  readOnlyRootFilesystem: false
  users:
    - system:serviceaccount:instana-postgres:postgres-operator
    - system:serviceaccount:instana-postgres:postgres-pod
    - system:serviceaccount:instana-postgres:default
  ```

- Create the SCC resource.

  ```sh
  kubectl apply -f postgres-scc.yaml
  ```
Then, proceed with installing the Postgres Operator. Complete the steps for your platform.
If you didn't yet pull the Postgres images from the external registry when you prepared for installation, you can pull them now. Run the following commands on your bastion host. Then, copy the images to your Instana host that is in your air-gapped environment.
```sh
docker pull artifact-public.instana.io/self-hosted-images/z/ds-operator-images/postgres/postgres-operator:v1.10.0_v0.1.0
docker pull artifact-public.instana.io/self-hosted-images/z/ds-operator-images/postgres/spilo:3.0-p1_v0.1.0
```
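One way to move the pulled images into the air-gapped environment is `docker save` / `docker load` over a file transfer. A hedged sketch (the archive naming and the `scp` target are illustrative, not prescribed by Instana):

```shell
# Derive a tarball name from an image reference, then save it for transfer.
image="artifact-public.instana.io/self-hosted-images/z/ds-operator-images/postgres/spilo:3.0-p1_v0.1.0"
archive="$(basename "${image%%:*}")-${image##*:}.tar"
echo "$archive"

# On the bastion host (requires docker; not run here):
#   docker save -o "$archive" "$image"
#   scp "$archive" <user>@<instana-host>:/tmp/
# On the Instana host:
#   docker load -i "/tmp/$archive"
```

Repeat for the `postgres-operator` image with its own tag.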
Complete the following steps on your Instana host.
- Retag the images for your internal image registry.

  ```sh
  docker tag artifact-public.instana.io/self-hosted-images/z/ds-operator-images/postgres/postgres-operator:v1.10.0_v0.1.0 <internal-image-registry>/postgres/postgres-operator:v1.10.0_v0.1.0
  docker tag artifact-public.instana.io/self-hosted-images/z/ds-operator-images/postgres/spilo:3.0-p1_v0.1.0 <internal-image-registry>/postgres/spilo:3.0-p1_v0.1.0
  ```

- Push the images to your internal image registry.

  ```sh
  docker push <internal-image-registry>/postgres/postgres-operator:v1.10.0_v0.1.0
  docker push <internal-image-registry>/postgres/spilo:3.0-p1_v0.1.0
  ```
- Create the `instana-postgres` namespace.

  ```sh
  kubectl create namespace instana-postgres
  ```
- Optional: If your internal image registry requires authentication, create an image pull secret.

  ```sh
  kubectl create secret docker-registry <secret_name> --namespace instana-postgres \
    --docker-username=<registry_username> \
    --docker-password=<registry_password> \
    --docker-server=<internal-image-registry>:<internal-image-registry-port> \
    --docker-email=<registry_email>
  ```
- Create a file, for example `values.yaml`, with the Postgres configuration.

  ```yaml
  configGeneral:
    kubernetes_use_configmaps: true
  securityContext:
    runAsUser: 101
  image:
    registry: <internal-image-registry>
    # Optional: if you created an image pull secret for your internal registry,
    # uncomment the following lines and update the image pull secret information.
    # imagePullSecrets:
    #   - name: <internal-image-registry-pull-secret>
    repository: postgres/postgres-operator
    tag: v1.10.0_v0.1.0
  configKubernetes:
    pod_service_account_definition: |
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: postgres-pod
      imagePullSecrets:
        - name: instana-registry
  podServiceAccount:
    name: postgres-pod
  ```
- Install the Postgres Operator. If you created an image pull secret in the previous step, add `--set imagePullSecrets[0].name="<internal-image-registry-pull-secret>"` to the following command.

  ```sh
  helm install postgres-operator postgres-operator-v1.10.1.tgz --version=v1.10.1 -f values.yaml -n instana-postgres
  ```
- Create a file, for example `postgres.yaml`, with the `postgresql` resource definition.

  ```yaml
  apiVersion: "acid.zalan.do/v1"
  kind: postgresql
  metadata:
    name: postgres
  spec:
    patroni:
      pg_hba:
        - local all all trust
        - host all all 0.0.0.0/0 md5
        - local replication standby trust
        - hostssl replication standby all md5
        - hostnossl all all all reject
        - hostssl all all all md5
    dockerImage: <internal-image-registry>/postgres/spilo:3.0-p1_v0.1.0
    teamId: instana
    numberOfInstances: 3
    spiloRunAsUser: 101
    spiloFSGroup: 103
    spiloRunAsGroup: 103
    postgresql:
      version: "15"
      parameters: # Expert section
        shared_buffers: "32MB"
    volume:
      size: 10Gi
      # Uncomment the following line to specify your own storage class; otherwise, the default storage class is used.
      # storageClass: <REPLACE>
    resources:
      requests:
        cpu: 500m
        memory: 2Gi
      limits:
        cpu: 1000m
        memory: 4Gi
  ```
- Complete the steps in Deploying and verifying Postgres (online and offline).
Deploying and verifying Postgres (online and offline)
Complete these steps to deploy the Postgres instance and create the data store.
- Create the `postgresql` resource.

  ```sh
  kubectl apply -f postgres.yaml --namespace=instana-postgres
  ```
- If you used the Zalando operator, complete these steps:
  - Retrieve the auto-generated password of the Zalando `postgres` user. By default, Zalando creates the `postgres` user with a randomly generated password.

    ```sh
    kubectl get secret postgres.postgres.credentials.postgresql.acid.zalan.do -n instana-postgres --template='{{index .data.password | base64decode}}' && echo
    ```
  - Store the retrieved password in the `config.yaml` file.
    - Replace the `<RETRIEVED_FROM_SECRET>` value with the password that you retrieved in the previous step.
    - Replace `<username in the Postgres data store>` with the Zalando username.

    ```yaml
    datastoreConfigs:
      ...
      postgresConfigs:
        - user: <username in the Postgres data store>
          password: <RETRIEVED_FROM_SECRET>
          adminUser: <username in the Postgres data store>
          adminPassword: <RETRIEVED_FROM_SECRET>
      ...
    ```
- Verify the Postgres Operator deployment.

  ```sh
  kubectl get all -n instana-postgres
  ```

  If the Postgres Operator is deployed successfully, the output is similar to one of the following examples:
  - Example output for the Zalando operator:

    ```
    NAME                                     READY   STATUS    RESTARTS   AGE
    pod/postgres-0                           1/1     Running   0          100s
    pod/postgres-1                           1/1     Running   0          69s
    pod/postgres-2                           1/1     Running   0          41s
    pod/postgres-operator-766455c58c-ntmpf   1/1     Running   0          11m

    NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/postgres            ClusterIP   192.168.1.107   <none>        5432/TCP   101s
    service/postgres-operator   ClusterIP   192.168.1.35    <none>        8080/TCP   11m
    service/postgres-repl       ClusterIP   192.168.1.72    <none>        5432/TCP   101s

    NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/postgres-operator   1/1     1            1           11m

    NAME                                           DESIRED   CURRENT   READY   AGE
    replicaset.apps/postgres-operator-766455c58c   1         1         1       11m

    NAME                        READY   AGE
    statefulset.apps/postgres   3/3     103s

    NAME                                                    IMAGE                             CLUSTER-LABEL   SERVICE-ACCOUNT   MIN-INSTANCES   AGE
    operatorconfiguration.acid.zalan.do/postgres-operator   ghcr.io/zalando/spilo-15:3.0-p1   cluster-name    postgres-pod      -1              11m

    NAME                                TEAM      VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE    STATUS
    postgresql.acid.zalan.do/postgres   instana   15        3      10Gi     500m          2Gi              106s   Running
    ```
  - Example output for the CloudNativePG operator:

    ```
    NAME                                       READY   STATUS    RESTARTS   AGE
    pod/postgres-1                             1/1     Running   0          100s
    pod/postgres-2                             1/1     Running   0          69s
    pod/postgres-3                             1/1     Running   0          41s
    pod/cnpg-cloudnative-pg-64bbc87958-fqnrl   1/1     Running   0          11m

    NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT
    service/cnpg-webhook-service   ClusterIP   172.30.66.183    <none>        443/TCP
    service/postgres-r             ClusterIP   172.30.163.146   <none>        5432/TCP
    service/postgres-ro            ClusterIP   172.30.226.75    <none>        5432/TCP
    service/postgres-rw            ClusterIP   172.30.235.178   <none>        5432/TCP

    NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/cnpg-cloudnative-pg   1/1     1            1           11m

    NAME                                             DESIRED   CURRENT   READY   AGE
    replicaset.apps/cnpg-cloudnative-pg-64bbc87958   1         1         1       11m
    ```
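As a final check with the Zalando operator, the `STATUS` column of the `postgresql` resource should read `Running`. A small sketch that extracts that last column (the live command is shown in a comment; the parsing itself is plain `awk`):

```shell
# Extract the STATUS (last column) from a `kubectl get postgresql` row.
postgresql_status() {
  awk 'NR == 1 { print $NF }'
}

# On a live cluster (not run here):
#   kubectl get postgresql -n instana-postgres --no-headers | postgresql_status
# Expect: Running
```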