# Updating the Postgres data store on Linux on IBM Z and LinuxONE

Upgrade the Postgres operator.
## Before you begin

Make sure that you prepared your online and offline hosts to pull images from the external repository. Also, ensure that the correct Helm repo is added.
## Postgres Operator versions and image tags for deployment

The following images are required for the pinned operator and Helm chart versions.
| Platform | Operator version | Helm chart version | Image with tag |
|---|---|---|---|
| Linux® x86_64 | 1.28.1 | 0.27.1 | artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.28.1_v0.32.0 <br> artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.37.0 |
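Before upgrading, you may want to confirm which operator image tag is currently deployed and compare it against the pinned tag from the table. A minimal sketch; the Deployment name `cnpg-cloudnative-pg` matches the verification output later in this document, and the `current_image` value here is illustrative (in practice it comes from the commented `kubectl` query):

```shell
pinned="v1.28.1_v0.32.0"

# In practice, read the running image from the cluster, for example:
#   kubectl get deployment cnpg-cloudnative-pg -n instana-postgres \
#     -o jsonpath='{.spec.template.spec.containers[0].image}'
current_image="artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.27.0_v0.31.0"  # illustrative value

# Keep only the tag after the last ':' (this registry reference has no port,
# so the last ':' separates the tag).
current="${current_image##*:}"

if [ "$current" = "$pinned" ]; then
  echo "operator already at pinned tag $pinned"
else
  echo "upgrade needed: $current -> $pinned"
fi
```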
## Upgrading Postgres online by using the CloudNativePG operator

Complete these steps to upgrade the Postgres Operator in an online environment.
1. Create `custom_values.yaml` and specify the toleration and affinity. Skip this step if the file is already created.

   ```yaml
   tolerations:
     - key: node.instana.io/monitor
       operator: Equal
       effect: NoSchedule
       value: "true"
   affinity:
     nodeAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
         nodeSelectorTerms:
           - matchExpressions:
               - key: node-role.kubernetes.io/monitor
                 operator: In
                 values:
                   - "true"
   ```
2. Upgrade the Postgres Operator. Replace `<UID from namespace>` in the following command with the file system group ID from the `instana-postgres` namespace:

   ```shell
   helm upgrade --install cnpg instana/cloudnative-pg \
     --set image.repository=artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg \
     --set image.tag=v1.28.1_v0.32.0 \
     --version=0.27.1 \
     --set imagePullSecrets[0].name=instana-registry \
     --set containerSecurityContext.runAsUser=<UID from namespace> \
     --set containerSecurityContext.runAsGroup=<UID from namespace> \
     -n instana-postgres -f custom_values.yaml
   ```
3. Update Postgres by patching the cluster configuration.

   ```shell
   kubectl patch cluster postgres -n instana-postgres --type=merge --patch '
   spec:
     imageName: artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.37.0
   '
   ```
4. Complete the steps in Verifying Postgres (online and offline).
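If you prefer to keep the merge patch above in a file instead of passing it inline, the same patch can be applied with `kubectl patch --patch-file`. A sketch; `postgres-image-patch.yaml` is a hypothetical file name:

```yaml
# postgres-image-patch.yaml (hypothetical file name)
spec:
  imageName: artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.37.0
```

It would then be applied with `kubectl patch cluster postgres -n instana-postgres --type=merge --patch-file postgres-image-patch.yaml`.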
## Upgrading Postgres offline by using the CloudNativePG operator

Complete these steps to upgrade the Postgres Operator in an air-gapped environment.
If you didn't yet pull the Postgres images from the external registry, you can pull them now. Run the following commands on your bastion host. Then, copy the images to your Instana host that is in your air-gapped environment.
```shell
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.28.1_v0.32.0
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.37.0
```
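To move the pulled images into the air-gapped environment, `docker save` and `docker load` can be used. A minimal sketch; the archive naming scheme here is an assumption, not a required convention:

```shell
image="artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.28.1_v0.32.0"

# ':' is not valid in a file name, so derive an archive name from the
# image reference by replacing it.
archive="$(basename "$image" | tr ':' '_').tar"
echo "$archive"

# On the bastion host:                 docker save -o "$archive" "$image"
# After copying to the Instana host:   docker load -i "$archive"
```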
Complete the following steps on your Instana host.
1. Retag the images for your internal image registry.

   ```shell
   docker tag artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.28.1_v0.32.0 <internal-image-registry>/operator/cloudnative-pg:v1.28.1_v0.32.0
   docker tag artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.37.0 <internal-image-registry>/datastore/cnpg-containers:15_v0.37.0
   ```
2. Push the images to your internal image registry on your bastion host.

   ```shell
   docker push <internal-image-registry>/operator/cloudnative-pg:v1.28.1_v0.32.0
   docker push <internal-image-registry>/datastore/cnpg-containers:15_v0.37.0
   ```
3. If you are upgrading Postgres on a Red Hat® OpenShift® cluster, determine the file system group ID from the `instana-postgres` namespace. Red Hat OpenShift requires that file system groups are within a range of values specific to the namespace.

   ```shell
   kubectl get namespace instana-postgres -o yaml
   ```

   An output similar to the following example is shown for the command:

   ```yaml
   apiVersion: v1
   kind: Namespace
   metadata:
     annotations:
       .......
       openshift.io/sa.scc.uid-range: 1000750000/10000
     creationTimestamp: "2024-01-14T07:04:59Z"
     labels:
       kubernetes.io/metadata.name: instana-postgres
     .......
     name: instana-postgres
   ```

   The `openshift.io/sa.scc.uid-range` annotation contains the range of allowed IDs. The range `1000750000/10000` indicates 10,000 values starting with ID `1000750000`, so it specifies the range of IDs from `1000750000` to `1000759999`. In this example, the value `1000750000` might be used as a file system group ID (UID).
4. Add `custom_values.yaml` and specify the toleration and affinity. Skip this step if the file is already created.

   ```yaml
   tolerations:
     - key: node.instana.io/monitor
       operator: Equal
       effect: NoSchedule
       value: "true"
   affinity:
     nodeAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
         nodeSelectorTerms:
           - matchExpressions:
               - key: node-role.kubernetes.io/monitor
                 operator: In
                 values:
                   - "true"
   ```
5. Upgrade the Postgres Operator.

   - **Red Hat OpenShift cluster**

     Use the UID that you determined from the `instana-postgres` namespace as the `<UID from namespace>` in the following command:

     ```shell
     helm upgrade --install cnpg cloudnative-pg-0.27.1.tgz \
       --set image.repository=image-registry.openshift-image-registry.svc:5000/instana-postgres/cloudnative-pg-operator \
       --set image.tag=v1.28.1_v0.32.0 \
       --version=0.27.1 \
       --set containerSecurityContext.runAsUser=<UID from namespace> \
       --set containerSecurityContext.runAsGroup=<UID from namespace> \
       -n instana-postgres -f custom_values.yaml
     ```

   - **Kubernetes cluster**

     ```shell
     helm upgrade --install cnpg cloudnative-pg-0.27.1.tgz \
       --set image.repository=<internal-image-registry>/operator/cloudnative-pg \
       --set image.tag=v1.28.1_v0.32.0 \
       --version=0.27.1 \
       -n instana-postgres
     ```
6. Update Postgres by patching the cluster configuration.

   ```shell
   kubectl patch cluster postgres -n instana-postgres --type=merge --patch '
   spec:
     imageName: <internal-image-registry>/datastore/cnpg-containers:15_v0.37.0
   '
   ```
7. Complete the steps in Verifying Postgres (online and offline).
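The UID-range arithmetic from the Red Hat OpenShift step above can be sketched as follows. The annotation value is taken from the example output; in practice it comes from `kubectl get namespace instana-postgres -o yaml`:

```shell
# Value of the openshift.io/sa.scc.uid-range annotation (from the example output)
range="1000750000/10000"

start="${range%/*}"          # first allowed ID
count="${range#*/}"          # number of allowed IDs
end=$((start + count - 1))   # last allowed ID

echo "allowed IDs: $start to $end"
```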
## Verifying Postgres (online and offline)

Complete these steps to verify the Postgres instance and data store.
1. Verify the Postgres Operator upgrade.

   ```shell
   kubectl get all -n instana-postgres
   ```

   If the Postgres Operator is deployed successfully, then the result of the command is as follows:

   ```
   NAME                                       READY   STATUS    RESTARTS   AGE
   pod/postgres-1                             1/1     Running   0          100s
   pod/postgres-2                             1/1     Running   0          69s
   pod/postgres-3                             1/1     Running   0          41s
   pod/cnpg-cloudnative-pg-64bbc87958-fqnrl   1/1     Running   0          11m

   NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
   service/cnpg-webhook-service   ClusterIP   172.30.66.183    <none>        443/TCP
   service/postgres-r             ClusterIP   172.30.163.146   <none>        5432/TCP
   service/postgres-ro            ClusterIP   172.30.226.75    <none>        5432/TCP
   service/postgres-rw            ClusterIP   172.30.235.178   <none>        5432/TCP

   NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/cnpg-cloudnative-pg   1/1     1            1           11m

   NAME                                             DESIRED   CURRENT   READY   AGE
   replicaset.apps/cnpg-cloudnative-pg-64bbc87958   1         1         1       11m
   ```
2. Make sure that the pods are scheduled on the desired nodes.

   ```shell
   kubectl get pods -n instana-postgres -o wide
   ```

   A sample output is shown in the following example.

   ```
   NAME                                   READY   STATUS    RESTARTS   AGE   IP              NODE                                           NOMINATED NODE   READINESS GATES
   cnpg-cloudnative-pg-7cdf76c49c-kztxr   1/1     Running   0          22d   10.254.20.122   worker0.instana-load-testing.cp.fyre.ibm.com   <none>           <none>
   postgres-1                             1/1     Running   0          22d   10.254.20.124   worker0.instana-load-testing.cp.fyre.ibm.com   <none>           <none>
   postgres-3                             1/1     Running   0          22d   10.254.16.93    worker1.instana-load-testing.cp.fyre.ibm.com   <none>           <none>
   postgres-4                             1/1     Running   0          22d   10.254.24.65    worker3.instana-load-testing.cp.fyre.ibm.com   <none>           <none>
   ```
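As a final check, you may want to confirm that every pod in the namespace reports `Running`. A sketch that parses sample output; in practice the input would come from `kubectl get pods -n instana-postgres --no-headers`:

```shell
# Sample output; in practice: pods="$(kubectl get pods -n instana-postgres --no-headers)"
pods="postgres-1   1/1   Running   0   22d
postgres-3   1/1   Running   0   22d
postgres-4   1/1   Running   0   22d"

# Count lines whose STATUS column (field 3) is not "Running"
not_running="$(printf '%s\n' "$pods" | awk '$3 != "Running"' | wc -l)"
echo "pods not running: $not_running"
```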