Updating the Postgres data store on Linux on IBM Z and LinuxONE

Upgrade the Postgres operator.

Before you begin

Make sure that you prepared your online or offline host to pull images from the external repository, and that the correct Helm repository is added.
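For reference, a minimal sketch of adding the Instana Helm repository follows; the repository URL and the use of your Instana agent key as credentials are assumptions and might differ in your environment.

# Add the Instana Helm repository so that instana/cloudnative-pg resolves in the upgrade commands.
# The URL and credentials below are illustrative assumptions; use the values from your Instana setup.
helm repo add instana https://artifact-public.instana.io/artifactory/rel-helm-customer-virtual \
  --username _ \
  --password <agent_key>
helm repo update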

Postgres Operator versions and image tags for deployment

The following images are needed for the pinned Helm chart or operator versions.

Platform: Linux® x86_64
Operator version: 1.28.1
Helm chart version: 0.27.1
Images with tags:

artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.28.1_v0.32.0
artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.37.0

Upgrading Postgres online by using the CloudNativePG operator

Complete these steps to upgrade the Postgres Operator in an online environment.

  1. Create a custom_values.yaml file and specify the toleration and affinity. Skip this step if the file already exists.

    tolerations:
    - key: node.instana.io/monitor
      operator: Equal
      effect: NoSchedule
      value: "true"
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: node-role.kubernetes.io/monitor
                  operator: In
                  values:
                  - "true"
     
  2. Upgrade the Postgres Operator. Replace <UID from namespace> in the following command with a file system group ID that is allowed for the instana-postgres namespace; on Red Hat OpenShift, you can determine it from the openshift.io/sa.scc.uid-range annotation as described in the offline procedure. The command also expects an image pull secret named instana-registry to exist in the namespace; a sketch for creating it follows this procedure.

    helm upgrade --install cnpg instana/cloudnative-pg --set image.repository=artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg --set image.tag=v1.28.1_v0.32.0 --version=0.27.1 --set imagePullSecrets[0].name=instana-registry --set containerSecurityContext.runAsUser=<UID from namespace> --set containerSecurityContext.runAsGroup=<UID from namespace> -n instana-postgres -f custom_values.yaml
     
  3. Update Postgres by patching the cluster configuration.

    kubectl patch cluster postgres -n instana-postgres --type=merge --patch '
    spec:
      imageName: artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.37.0
    '
     
  4. Complete the steps in Verifying Postgres (online and offline).
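The helm upgrade command in step 2 references an image pull secret named instana-registry. If that secret does not yet exist in the instana-postgres namespace, the following is a minimal sketch for creating it; the username _ and the agent key as the password are assumptions and might differ in your environment.

# Create the pull secret referenced by --set imagePullSecrets[0].name=instana-registry.
# The username "_" and the <agent_key> placeholder are illustrative assumptions.
kubectl create secret docker-registry instana-registry \
  --namespace instana-postgres \
  --docker-server=artifact-public.instana.io \
  --docker-username=_ \
  --docker-password=<agent_key>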

Upgrading Postgres offline by using the CloudNativePG operator

Complete these steps to upgrade the Postgres Operator in an air-gapped environment.

If you haven't pulled the Postgres images from the external registry yet, pull them now. Run the following commands on your bastion host. Then, copy the images to your Instana host in the air-gapped environment, as shown in the sketch after the commands.

docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.28.1_v0.32.0
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.37.0
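
One way to copy these images to the air-gapped Instana host is to save them as tar archives on the bastion host, transfer the archives, and load them on the target host. The following is a minimal sketch; the archive names and the scp destination are illustrative assumptions.

# On the bastion host: save the pulled images to tar archives.
docker save artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.28.1_v0.32.0 -o cloudnative-pg.tar
docker save artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.37.0 -o cnpg-containers.tar

# Transfer the archives to the Instana host (the destination is an assumption).
scp cloudnative-pg.tar cnpg-containers.tar <user>@<instana-host>:/tmp/

# On the Instana host: load the images before retagging them in step 1.
docker load -i /tmp/cloudnative-pg.tar
docker load -i /tmp/cnpg-containers.tar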
         

Complete the following steps on your Instana host.

  1. Retag the images to your internal image registry.

    docker tag artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.28.1_v0.32.0 <internal-image-registry>/operator/cloudnative-pg:v1.28.1_v0.32.0
    docker tag artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.37.0 <internal-image-registry>/datastore/cnpg-containers:15_v0.37.0
                   
  2. Push the images to your internal image registry on your bastion host.

    docker push <internal-image-registry>/operator/cloudnative-pg:v1.28.1_v0.32.0
    docker push <internal-image-registry>/datastore/cnpg-containers:15_v0.37.0
                   
  3. If you are upgrading Postgres on a Red Hat® OpenShift® cluster, determine the file system group ID from the instana-postgres namespace. Red Hat OpenShift requires that file system groups are within a range of values specific to the namespace.

    kubectl get namespace instana-postgres -o yaml
     

    The command output is similar to the following example:

    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        .......
        openshift.io/sa.scc.uid-range: 1000750000/10000
      creationTimestamp: "2024-01-14T07:04:59Z"
      labels:
        kubernetes.io/metadata.name: instana-postgres
        .......
      name: instana-postgres
     

    The openshift.io/sa.scc.uid-range annotation contains the range of allowed IDs. The range 1000750000/10000 indicates 10,000 values starting with ID 1000750000, so it specifies the IDs from 1000750000 to 1000759999. In this example, the value 1000750000 can be used as the <UID from namespace> value. A sketch for extracting this value with kubectl follows this procedure.

  4. Create a custom_values.yaml file and specify the toleration and affinity. Skip this step if the file already exists.

    tolerations:
    - key: node.instana.io/monitor
      operator: Equal
      effect: NoSchedule
      value: "true"
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: node-role.kubernetes.io/monitor
                  operator: In
                  values:
                  - "true"
     
  5. Upgrade the Postgres Operator.

    • Red Hat OpenShift cluster

      Use the UID that you determined in step 3 as the <UID from namespace> in the following command:

      helm upgrade --install cnpg cloudnative-pg-0.27.1.tgz --set image.repository=image-registry.openshift-image-registry.svc:5000/instana-postgres/cloudnative-pg-operator --set image.tag=v1.28.1_v0.32.0 --version=0.27.1 --set containerSecurityContext.runAsUser=<UID from namespace> --set containerSecurityContext.runAsGroup=<UID from namespace> -n instana-postgres -f custom_values.yaml
       
    • Kubernetes cluster

      helm upgrade --install cnpg cloudnative-pg-0.27.1.tgz --set image.repository=<internal-image-registry>/operator/cloudnative-pg --set image.tag=v1.28.1_v0.32.0 --version=0.27.1 -n instana-postgres -f custom_values.yaml
       
  6. Update Postgres by patching the cluster configuration.

    kubectl patch cluster postgres -n instana-postgres --type=merge --patch '
    spec:
      imageName: <internal-image-registry>/datastore/cnpg-containers:15_v0.37.0
    '
     
  7. Complete the steps in Verifying Postgres (online and offline).
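As referenced in step 3, the following is a minimal sketch for extracting the file system group ID from the namespace annotation with kubectl instead of reading the YAML output manually; it assumes that the openshift.io/sa.scc.uid-range annotation is set as shown in the example.

# Read the start of the allowed ID range (for example, 1000750000 from "1000750000/10000").
# Dots in the annotation key are escaped for kubectl jsonpath.
UID_FROM_NAMESPACE=$(kubectl get namespace instana-postgres \
  -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}' | cut -d/ -f1)
echo "${UID_FROM_NAMESPACE}"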

Verifying Postgres (online and offline)

Complete these steps to verify the Postgres instance and data store. A check for confirming that the cluster runs the new image follows this procedure.

  1. Verify the Postgres Operator upgrade.

    kubectl get all -n instana-postgres
     

    If the Postgres Operator is deployed successfully, the command output is similar to the following example:

    NAME                                       READY   STATUS    RESTARTS   AGE
    pod/postgres-1                             1/1     Running   0          100s
    pod/postgres-2                             1/1     Running   0          69s
    pod/postgres-3                             1/1     Running   0          41s
    pod/cnpg-cloudnative-pg-64bbc87958-fqnrl   1/1     Running   0          11m

    NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT
    service/cnpg-webhook-service   ClusterIP   172.30.66.183    <none>        443/TCP
    service/postgres-r             ClusterIP   172.30.163.146   <none>        5432/TCP
    service/postgres-ro            ClusterIP   172.30.226.75    <none>        5432/TCP
    service/postgres-rw            ClusterIP   172.30.235.178   <none>        5432/TCP

    NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/cnpg-cloudnative-pg   1/1     1            1           11m

    NAME                                             DESIRED   CURRENT   READY   AGE
    replicaset.apps/cnpg-cloudnative-pg-64bbc87958   1         1         1       11m
     
  2. Make sure that the pods are scheduled on the desired nodes.

    kubectl get pods -n instana-postgres -o wide
     

    A sample output is shown in the following example.

    NAME                                   READY   STATUS    RESTARTS   AGE   IP              NODE                                           NOMINATED NODE   READINESS GATES
    cnpg-cloudnative-pg-7cdf76c49c-kztxr   1/1     Running   0          22d   10.254.20.122   worker0.instana-load-testing.cp.fyre.ibm.com   <none>           <none>
    postgres-1                             1/1     Running   0          22d   10.254.20.124   worker0.instana-load-testing.cp.fyre.ibm.com   <none>           <none>
    postgres-3                             1/1     Running   0          22d   10.254.16.93    worker1.instana-load-testing.cp.fyre.ibm.com   <none>           <none>
    postgres-4                             1/1     Running   0          22d   10.254.24.65    worker3.instana-load-testing.cp.fyre.ibm.com   <none>           <none>
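
As mentioned at the start of this verification procedure, you can also confirm that the cluster was switched to the new image. The following is a minimal sketch; after a successful upgrade, both commands report the cnpg-containers:15_v0.37.0 image, prefixed with your registry.

# Image that the CloudNativePG cluster is configured to use
kubectl get cluster postgres -n instana-postgres -o jsonpath='{.spec.imageName}'

# Image that a running Postgres pod is actually using
kubectl get pod postgres-1 -n instana-postgres -o jsonpath='{.spec.containers[0].image}'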