Retaining the monitoring data during upgrade

You can retain the monitoring data when you upgrade your IBM Cloud Private cluster.

In IBM® Cloud Private Version 3.1.1, if you dynamically provisioned storage for the monitoring service, the data is lost during upgrade. If you used local storage for the monitoring data, you can complete the steps in the following sections to retain the data during upgrade.

Update existing persistent volumes

  1. Set up the kubectl CLI. See Accessing your cluster from the Kubernetes CLI (kubectl).

  2. Get a list of all persistent volumes (PVs) in your cluster. Make a note of the PV that each component of the monitoring service uses.

    kubectl get pv
    

    The output resembles the following code:

    NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                       STORAGECLASS               REASON    AGE
    helm-repo-pv                   5Gi        RWO            Delete           Bound     kube-system/helm-repo-pvc                   helm-repo-storage                    1d
    image-manager-10.10.24.83      20Gi       RWO            Retain           Bound     kube-system/image-manager-image-manager-0   image-manager-storage                1d
    logging-datanode-10.10.24.83   20Gi       RWO            Retain           Bound     kube-system/data-logging-elk-data-0         logging-storage-datanode             1d
    mongodb-10.10.24.83            20Gi       RWO            Retain           Bound     kube-system/mongodbdir-icp-mongodb-0        mongodb-storage                      1d
    alertmanager-pv                1Gi        RWO            Delete           Bound     default/my-release-prometheus-alertmanager                                       17h
    
  3. Run the following command for each PV that the monitoring service uses. The command changes the reclaim policy of the PV from Delete to Retain. A scripted sketch that patches several PVs at once follows these steps:

    kubectl patch pv <PV name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
    

    Following is an example command and output:

    kubectl patch pv alertmanager-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
    persistentvolume "alertmanager-pv" patched
    
  4. Verify that the PVs are updated.

    kubectl get pv
    

    The output resembles the following code:

    NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                       STORAGECLASS               REASON    AGE
    helm-repo-pv                   5Gi        RWO            Delete           Bound     kube-system/helm-repo-pvc                   helm-repo-storage                    1d
    image-manager-10.10.24.83      20Gi       RWO            Retain           Bound     kube-system/image-manager-image-manager-0   image-manager-storage                1d
    logging-datanode-10.10.24.83   20Gi       RWO            Retain           Bound     kube-system/data-logging-elk-data-0         logging-storage-datanode             1d
    mongodb-10.10.24.83            20Gi       RWO            Retain           Bound     kube-system/mongodbdir-icp-mongodb-0        mongodb-storage                      1d
    alertmanager-pv                1Gi        RWO            Retain           Bound     default/my-release-prometheus-alertmanager                                       17h
    
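If several PVs need the same change, you can script steps 3 and 4 with a short loop. The following sketch uses placeholder PV names (prometheus-pv, alertmanager-pv, grafana-pv); replace them with the names from your own kubectl get pv output.

# Placeholder PV names; replace them with the monitoring PVs from your cluster.
for pv in prometheus-pv alertmanager-pv grafana-pv; do
  kubectl patch pv "$pv" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
done

# Confirm that the reclaim policy of each PV is now Retain.
kubectl get pv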

Create persistent volumes

For each PV that the monitoring service components use, create a new PV. You must assign the new PV to the same node that the old PV was assigned to.
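
To find the node and host path that an existing PV uses, inspect the PV definition. In the following example command, alertmanager-pv is the PV name from the earlier kubectl get pv output; check the hostPath path and any node-affinity settings in the output.

kubectl get pv alertmanager-pv -o yaml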

Update the PersistentVolume definitions to match the default storage requirements of the monitoring components, or the storage requirements that you define in the Helm chart. To ensure that the existing data is preserved during the upgrade, use the same storage class that you used in the existing PVs, and point the hostPath to the directory where the existing data is stored.

Following is an example definition of a new PV for Alertmanager.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: alertmanager-data
  labels:
    component: alertmanager
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": [  "10.10.24.83" ]
            }
          ]}
        ]}
      }'
spec:
  storageClassName: monitoring-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/opt/ibm/cfc/monitoring/alertmanager"
  persistentVolumeReclaimPolicy: Retain
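
After you save the definition to a file, create and verify the PV with kubectl. The file name in the following commands is an example.

kubectl apply -f alertmanager-pv.yaml
kubectl get pv alertmanager-data

Create a similar definition for each of the other monitoring components, and change the name, component label, capacity, and hostPath path to match that component and the location of its existing data.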

Update the config.yaml file

Update the config.yaml file in the /<installation_directory>/cluster folder. See Configuring the monitoring service. Change the value of the persistentVolume.enabled parameter from false to true, and set the persistentVolume.storageClass parameter to the name of the storage class that you use for the persistent volumes.
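
For example, the monitoring settings in the config.yaml file might resemble the following snippet. The layout that is shown here, with separate prometheus, alertmanager, and grafana sections, is an assumption; verify the exact parameter names in Configuring the monitoring service.

monitoring:
  prometheus:
    persistentVolume:
      enabled: true
      storageClass: "monitoring-storage"
  alertmanager:
    persistentVolume:
      enabled: true
      storageClass: "monitoring-storage"
  grafana:
    persistentVolume:
      enabled: true
      storageClass: "monitoring-storage"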

Next, continue with upgrading your cluster.