Migrating data from Zalando to CloudNativePG on Linux x86_64 and Linux on IBM Z and LinuxONE clusters
You can transfer data from Zalando to CloudNativePG by using the `pg_basebackup` bootstrap mode within a cluster that operates in replica mode. To transfer the data, you create a CloudNativePG replica cluster (target) that replicates from the Zalando data store (source).
Prerequisites
To bootstrap from a live cluster, make sure that the following prerequisites are met:
- The target and source clusters run the same major PostgreSQL version. You can check the source version as shown in the sketch after this list.
- The `streaming_replica` user is set up with replication and login roles in the Zalando PostgreSQL database. The steps in the next section create this user.
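To check the PostgreSQL major version of the source, you can use a quick check like the following sketch. It assumes the `instana-postgres` namespace and the Spilo pod labels that are used later in this topic; compare the result with the CloudNativePG image that you plan to use (the image tag used later in this topic starts with `15`, which indicates PostgreSQL 15).

```sh
# Identify the Zalando primary pod (assumes the labels and namespace used in this topic)
PRIMARY=$(kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo,spilo-role=master -n instana-postgres)

# Print the PostgreSQL version of the source cluster
kubectl exec -it "$PRIMARY" -n instana-postgres -- psql -U postgres -tAc "SHOW server_version;"
```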
Modifying the Zalando Postgres data store for data migration
To modify the Zalando Postgres data store for data migration, complete the following steps:
- Connect to the Zalando pod:

  - View the details of the primary pod:

    ```sh
    kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo,spilo-role=master -n instana-postgres
    ```

  - Run commands directly in the pod:

    ```sh
    kubectl exec -it <primary_pod_name> -n instana-postgres -- bash
    ```

  - Connect to the Postgres database:

    ```sh
    psql -U postgres
    ```

  - List the roles and create a `streaming_replica` user with replication and login roles in the Zalando database. Replace `<password_retrieved_from_zalando>` with the password from your Zalando installation; one way to retrieve it is shown in the sketch at the end of this step.

    ```
    \du
    CREATE ROLE streaming_replica WITH REPLICATION;
    ALTER ROLE streaming_replica WITH LOGIN PASSWORD '<password_retrieved_from_zalando>';
    ```

  - Exit the PostgreSQL interactive terminal:

    ```
    \q
    ```
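  If you need the Zalando-generated password, you can usually read it from the secret that the Zalando operator created. The following is a minimal sketch to run from your workstation, outside the pod; the secret name follows the Zalando pattern `<username>.<clustername>.credentials.postgresql.acid.zalan.do` and assumes a Zalando cluster that is named `postgres` in the `instana-postgres` namespace, so adjust it to your environment.

  ```sh
  # Read the Zalando-generated password for the postgres user (assumed secret name)
  kubectl get secret postgres.postgres.credentials.postgresql.acid.zalan.do \
    -n instana-postgres -o jsonpath='{.data.password}' | base64 -d; echo
  ```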
- Create two empty files that are called `custom.conf` and `override.conf` within the `pgdata` directory on all pods, alongside the `postgresql.conf` file:

  - List the pods:

    ```sh
    kubectl get pods -n instana-postgres
    ```

  - Run the following commands on each of the pods:

    ```sh
    kubectl exec -it <pod_name> -n instana-postgres -- bash
    cd /var/lib/postgresql/data/pgdata
    touch custom.conf
    touch override.conf
    ```

- Exit from the pod terminal:

  ```sh
  exit
  ```
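Before you continue, you can confirm that both files exist on every pod. The following sketch assumes that all Zalando pods carry the `application=spilo` label that is used earlier in this topic.

```sh
# Verify that both files exist alongside postgresql.conf on each Zalando pod
for pod in $(kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo -n instana-postgres); do
  echo "== $pod"
  kubectl exec "$pod" -n instana-postgres -- ls -l /var/lib/postgresql/data/pgdata/custom.conf /var/lib/postgresql/data/pgdata/override.conf
done
```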
Creating a Postgres data store by using the CloudNativePG Postgres Operator for data migration
Installing the Postgres Operator online
To deploy the CloudNativePG Postgres Operator online, complete the following steps:
- Create the `instana-postgres-01` namespace:

  ```sh
  kubectl create namespace instana-postgres-01
  ```
- Determine the file system group ID on Red Hat OpenShift.

  Red Hat OpenShift requires that file system groups are within a range of values specific to the namespace. On the cluster where the CloudNativePG Kubernetes Operator is deployed, run the following command:

  ```sh
  kubectl get namespace instana-postgres-01 -o yaml
  ```

  An output similar to the following example is shown for the command:

  ```yaml
  apiVersion: v1
  kind: Namespace
  metadata:
    annotations:
      .......
      openshift.io/sa.scc.uid-range: 1000750000/10000
    creationTimestamp: "2024-01-14T07:04:59Z"
    labels:
      kubernetes.io/metadata.name: instana-postgres-01
      .......
    name: instana-postgres-01
  ```

  The `openshift.io/sa.scc.supplemental-groups` annotation contains the range of allowed IDs. The range `1000750000/10000` indicates 10,000 values that start with ID `1000750000`, so it specifies the range of IDs from `1000750000` to `1000759999`. In this example, the value `1000750000` might be used as a file system group ID.
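  To extract just the numeric ID for use in the next step, you can use a sketch like the following; it assumes that the annotation appears exactly as in the example output.

  ```sh
  # Print only the first value of the UID range annotation (for example, 1000750000)
  kubectl get namespace instana-postgres-01 -o yaml \
    | grep 'openshift.io/sa.scc.uid-range' \
    | awk '{print $2}' | cut -d/ -f1
  ```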
- Install the CloudNativePG Postgres Operator by running the following Helm commands:

  ```sh
  helm repo add instana https://artifact-public.instana.io/artifactory/rel-helm-customer-virtual --username=_ --password=<AGENT_KEY>

  helm repo update

  helm install cnpg instana/cloudnative-pg \
    --set image.repository=artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg \
    --set image.tag=v1.26.1_v0.22.0 \
    --version=0.25.0 \
    --set imagePullSecrets[0].name=instana-registry \
    --set containerSecurityContext.runAsUser=<UID from namespace> \
    --set containerSecurityContext.runAsGroup=<UID from namespace> \
    -n instana-postgres-01
  ```
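  After the installation completes, you can check that the operator is running before you continue. This is a generic verification; the exact deployment and pod names depend on the Helm release.

  ```sh
  # The operator deployment and its pod should reach the Ready state
  kubectl get deployments -n instana-postgres-01
  kubectl get pods -n instana-postgres-01
  ```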
- Create image pull secrets for the `instana-postgres-01` namespace:

  ```sh
  kubectl create secret docker-registry instana-registry -n instana-postgres-01 \
    --docker-username=_ \
    --docker-password=<AGENT_KEY> \
    --docker-server=artifact-public.instana.io
  ```

  Note: Before you create the secret, update the `<AGENT_KEY>` value with your own agent key.
- Create a file, such as `postgres-secret.yaml`, for external cluster access. Replace `<user_generated_password_from_zalando>` with the password that you retrieved from the Zalando installation.

  ```yaml
  kind: Secret
  apiVersion: v1
  metadata:
    name: instanaadmin
  type: Opaque
  stringData:
    username: instanaadmin
    password: <user_generated_password_from_zalando>
  ```
- Apply the `postgres-secret.yaml` file:

  ```sh
  kubectl apply -f postgres-secret.yaml -n instana-postgres-01
  ```
- Create a CloudNativePG `Cluster` resource in `replica` mode:

  - Create a file, such as `cnpg-postgres.yaml`, as follows:

    ```yaml
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: postgres
    spec:
      instances: 3
      imageName: artifact-public.instana.io/self-hosted-images/3rd-party/cnpg-containers:15_v0.29.0
      imagePullPolicy: IfNotPresent
      imagePullSecrets:
        - name: instana-registry
      enableSuperuserAccess: true
      replicationSlots:
        highAvailability:
          enabled: true
      managed:
        roles:
          - name: instanaadmin
            login: true
            superuser: true
            createdb: true
            createrole: true
            replication: true
            passwordSecret:
              name: instanaadmin
      postgresql:
        pg_hba:
          - local all all trust
          - host replication postgres all trust
          - host replication streaming_replica 0.0.0.0/0 trust
          - host all all 0.0.0.0/0 trust
          - local replication standby trust
          - hostssl replication standby all md5
          - hostnossl all all all reject
          - hostssl all all all md5
      bootstrap:
        pg_basebackup:
          source: zalando-postgres
      replica:
        enabled: true
        source: zalando-postgres
      externalClusters:
        - name: zalando-postgres
          connectionParameters:
            host: postgres.instana-postgres.svc
            user: postgres
          password:
            name: instanaadmin
            key: password
      superuserSecret:
        name: instanaadmin
      storage:
        size: 20Gi
        storageClass: nfs-client
    ```

  - Apply the `cnpg-postgres.yaml` file by running the following command:

    ```sh
    kubectl apply -f cnpg-postgres.yaml -n instana-postgres-01
    ```
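  The bootstrap can take a while, depending on the size of the Zalando database. You can watch the progress with commands like the following; the `postgres-1-pgbasebackup` pod is described in the next step.

  ```sh
  # Watch the Cluster resource and its pods while pg_basebackup copies the data
  kubectl get clusters.postgresql.cnpg.io -n instana-postgres-01
  kubectl get pods -n instana-postgres-01

  # Inspect the bootstrap logs of the initial pod
  kubectl logs postgres-1-pgbasebackup -n instana-postgres-01
  ```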
- Open a shell in the debug container of the first CloudNativePG pod to modify `postgresql.conf`.

  After the cluster is initialized in replica mode, the status of the initial pod (`postgres-1-pgbasebackup`) is `Completed`. Subsequent attempts to start the first CloudNativePG pod (`postgres-1`) fail. This is expected behavior.

  To ensure successful initialization of the `Cluster` and subsequent starting of the pod, complete the following steps:

  - Run the following commands to open a debug shell on the initial pod and go to the directory that contains the `pgdata` volume:

    ```sh
    kubectl debug pod/postgres-1 --as-root -n instana-postgres-01
    cd /var/lib/postgresql/data/pgdata/
    ```

  - Modify the `pg_hba` and `pg_ident` paths inside the `postgresql.conf` file in each pod (one way to do this is shown in the sketch at the end of this substep):

    - Change the `pg_hba` path from `/var/lib/postgresql/15/main/pg_hba.conf` to the following path:

      ```
      /var/lib/postgresql/data/pgdata/pg_hba.conf
      ```

    - Change the `pg_ident` path from `/var/lib/postgresql/15/main/pg_ident.conf` to the following path:

      ```
      /var/lib/postgresql/data/pgdata/pg_ident.conf
      ```
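    If the default paths match the ones shown above and `sed` is available in the debug container, a sketch such as the following makes both changes from within the debug shell; otherwise, edit `postgresql.conf` manually with a text editor.

    ```sh
    # Point the hba_file and ident_file settings at the copies inside the pgdata volume
    sed -i "s|/var/lib/postgresql/15/main/pg_hba.conf|/var/lib/postgresql/data/pgdata/pg_hba.conf|" postgresql.conf
    sed -i "s|/var/lib/postgresql/15/main/pg_ident.conf|/var/lib/postgresql/data/pgdata/pg_ident.conf|" postgresql.conf
    ```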
  - Add `include 'custom.conf'` and `include 'override.conf'` at the end of the file:

    ```sh
    echo "include 'custom.conf'" >> postgresql.conf
    echo "include 'override.conf'" >> postgresql.conf
    ```

- Restart the pod. After the pod starts, all the pods replicate from the first pod.
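  One way to restart the pod is to delete it so that the CloudNativePG operator re-creates it. The following is only a sketch and assumes that you exited the debug shell first and that re-creating the pod is acceptable in your environment.

  ```sh
  # Delete the instance pod; the operator re-creates it with the updated configuration
  kubectl delete pod postgres-1 -n instana-postgres-01

  # Watch until all instances are running and replicating
  kubectl get pods -n instana-postgres-01 -w
  ```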
- Use the new CloudNativePG cluster by completing the remaining steps.

- Disable the replica cluster:

  - Modify the `replica` section of the `cnpg-postgres.yaml` file:

    ```yaml
    ........
    replica:
      enabled: false
      source: zalando-postgres
    ..........
    ```

  - Reapply the `cnpg-postgres.yaml` file:

    ```sh
    kubectl apply -f cnpg-postgres.yaml -n instana-postgres-01
    ```
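  After you reapply the file with replica mode disabled, CloudNativePG promotes the cluster so that it no longer follows the Zalando source. As a quick check, the primary instance should no longer be in recovery; the following sketch assumes that `postgres-1` is the current primary.

  ```sh
  # Returns f on a promoted primary, t while the instance is still a replica
  kubectl exec -it postgres-1 -n instana-postgres-01 -- psql -U postgres -tAc "SELECT pg_is_in_recovery();"
  ```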
- Connect to the database:

  ```sh
  psql -U postgres
  ```

- Update the collation version:

  ```sql
  ALTER DATABASE template1 REFRESH COLLATION VERSION;
  ```
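  If you want to review the collation version that is recorded for the other databases as well (PostgreSQL 15 and later), you can run a query such as the following from the pod shell after you exit `psql`.

  ```sh
  # List databases with their recorded collation versions
  psql -U postgres -c "SELECT datname, datcollversion FROM pg_database;"
  ```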
- Update the core spec configuration:

  - In your Instana Core file, update the `postgresConfigs` configuration as shown in the following example:

    ```yaml
    .....................
    postgresConfigs:
      - authEnabled: true
        hosts:
          - postgres-rw.instana-postgres-01
    .....................
    ```

  - Reapply the `core.yaml` file:

    ```sh
    kubectl apply -f core.yaml -n instana-core
    ```
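After the Core configuration is reapplied, the Instana components connect to the CloudNativePG read-write service. You can confirm that the service behind the `postgres-rw.instana-postgres-01` host entry exists; CloudNativePG creates `-rw`, `-ro`, and `-r` services for the cluster.

```sh
# The postgres-rw service backs the postgres-rw.instana-postgres-01 host entry
kubectl get svc -n instana-postgres-01
```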