Prerequisites for Kubernetes Backup Support
Before you can install Kubernetes Backup Support, ensure that all system requirements and prerequisites are met.
For Kubernetes Backup Support system requirements, see Kubernetes Backup Support requirements.
Enabling the VolumeSnapshotDataSource feature
To support copy backup and snapshot restore operations, you must enable the VolumeSnapshotDataSource alpha feature.
For more information about alpha features, see Feature Gates.
- Using the sudo command, edit the following YAML files:
- /etc/kubernetes/manifests/kube-apiserver.yaml
- /etc/kubernetes/manifests/kube-controller-manager.yaml
- /etc/kubernetes/manifests/kube-scheduler.yaml
- In each YAML file, add the following statement within the command section:
  - --feature-gates=VolumeSnapshotDataSource=true
Important: Edit the YAML files in place and do not create backup copies of these files in the same directory. Backup copies in the /etc/kubernetes/manifests directory might negate the changes that you made to enable the VolumeSnapshotDataSource feature gate. You might have to wait a minute or two for Kubernetes to detect the changes.
- Verify whether the feature is enabled by issuing the following commands:
ps aux | grep apiserver | grep feature-gates
ps aux | grep scheduler | grep feature-gates
ps aux | grep controller-manager | grep feature-gates
The output for one of these commands is similar to the following example:
root 13121 7.4 2.5 518276 305424 ? Ssl Sep06 120:37 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=192.0.2.0 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=198.51.100.0/24 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --feature-gates=VolumeSnapshotDataSource=true
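The verification above can also be scripted. The following is a minimal Python sketch, not part of the product, that parses the --feature-gates flag out of a component's command line (the command line shown is abbreviated from the example output):

```python
def feature_gates(cmdline: str) -> dict:
    """Parse the --feature-gates flag of a Kubernetes component
    command line into a dict of gate name -> bool."""
    gates = {}
    for arg in cmdline.split():
        if arg.startswith("--feature-gates="):
            # Flag value is a comma-separated list of name=true/false pairs
            value = arg.split("=", 1)[1]
            for pair in value.split(","):
                name, _, setting = pair.partition("=")
                gates[name] = (setting == "true")
    return gates

# Abbreviated kube-apiserver command line from the example output
cmdline = ("kube-apiserver --authorization-mode=Node,RBAC "
           "--secure-port=6443 --feature-gates=VolumeSnapshotDataSource=true")
print(feature_gates(cmdline))  # {'VolumeSnapshotDataSource': True}
```

A command line with no --feature-gates flag yields an empty dict, which indicates that the feature gate still must be added.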
Verifying whether the metrics server is running
To help optimize product performance, ensure that Kubernetes Metrics Server 0.3.5 or later is installed and running properly on your cluster. The metrics server is required for the Kubernetes Backup Support scheduler to determine the resources that are used by concurrent data mover instances.
If the metrics server does not return data, the number of data movers that are used for backup operations is limited, which might negatively impact performance.
You can verify that the metrics server is installed and returning metrics data by completing the following steps:
- Verify the installation by issuing the following command:
kubectl get deploy,svc -n kube-system | egrep metrics-server
The output is similar to the following example:
deployment.extensions/metrics-server 1/1 1 1 3d4h
service/metrics-server ClusterIP 198.51.100.0 <none> 443/TCP 3d4h
- Verify that the metrics server is returning data for all nodes by issuing the following command:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
The output is similar to the following example:
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "cirrus12",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/cirrus12",
        "creationTimestamp": "2019-08-08T23:59:49Z"
      },
      "timestamp": "2019-08-08T23:59:08Z",
      "window": "30s",
      "usage": {
        "cpu": "1738876098n",
        "memory": "8406880Ki"
      }
    }
  ]
}
Tip: The command might fail or might return an empty "items" list. This error is often caused by installing the metrics server with a self-signed certificate. To resolve the issue, install the metrics server with a correctly signed certificate that is recognized by the cluster.
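The JSON payload returned by the raw metrics API can also be checked programmatically. The following Python sketch, not part of the product, summarizes per-node usage from a NodeMetricsList payload (the sample is trimmed from the example output above); an empty result signals the empty "items" condition described in the tip:

```python
import json

def summarize_node_metrics(payload: str) -> list:
    """Return (node, cpu, memory) tuples from a NodeMetricsList JSON
    payload; an empty list means the metrics server returned no data."""
    doc = json.loads(payload)
    return [(item["metadata"]["name"],
             item["usage"]["cpu"],
             item["usage"]["memory"])
            for item in doc.get("items", [])]

# Sample payload trimmed from the example output
sample = '''{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1",
"items":[{"metadata":{"name":"cirrus12"},"timestamp":"2019-08-08T23:59:08Z",
"window":"30s","usage":{"cpu":"1738876098n","memory":"8406880Ki"}}]}'''
print(summarize_node_metrics(sample))
# [('cirrus12', '1738876098n', '8406880Ki')]
```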
Defining the application and persistent volume claim relationship
You can optionally tie your stateful applications to their persistent volume claims (PVCs) by using an owner-dependent relationship. By defining this relationship, you enable cascading actions for the applications.
For example, scaling up and scaling down an application can cause the scheduled backups of its PVC to be paused and resumed. Similarly, deleting the application causes the deletion of the PVC, which in turn triggers the deletion of the backups.
After an application starts using a PVC to store persistent data, you can update the PVC definition to reference its owner application.
The following sample configuration file for a PVC shows the owner-dependent relationship between an application and a PVC object. The PVC object includes the details of the owner deployment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
  ownerReferences:
  - apiVersion: apps/v1beta1
    blockOwnerDeletion: true
    kind: Deployment
    name: Dept10-deployment
    uid: 3b760e89-7da5-11e9-8c5a-0050568ba59c
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd
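Rather than editing the PVC manifest by hand, the ownerReferences entry can also be applied to a live PVC with a patch. The following Python sketch, offered only as an illustration, builds such a patch body; the deployment name and UID are taken from the sample above, and in practice you would read them from the live Deployment object:

```python
import json

def owner_reference_patch(kind, api_version, name, uid):
    """Build a merge-patch body that records the given object as the
    owner of a PVC, enabling cascading deletion of the PVC with its
    owner."""
    return {"metadata": {"ownerReferences": [{
        "apiVersion": api_version,
        "kind": kind,
        "name": name,
        "uid": uid,
        "blockOwnerDeletion": True,
    }]}}

# Values from the sample manifest above
patch = owner_reference_patch("Deployment", "apps/v1beta1",
                              "Dept10-deployment",
                              "3b760e89-7da5-11e9-8c5a-0050568ba59c")
print(json.dumps(patch))
```

The printed JSON could then be supplied to a command such as kubectl patch pvc demo-pvc -p '<patch>' to update the PVC in place.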