Configuring the Ceph-CSI driver for Ceph File Systems (CephFS)

As a Kubernetes Administrator, you can use this information to configure the Ceph-CSI driver for use with IBM Storage Ceph File Systems (CephFS) on your cluster.

Before you begin

  • Ensure that the Ceph-CSI driver and the relevant YAML files, provided by IBM Sales, are installed.
  • Ensure that the Ceph-CSI driver is properly configured for IBM Storage Ceph.

    For more information, see Initial configuration for IBM Storage Ceph.

  • Ensure that connectivity exists between the Kubernetes cluster and the Ceph cluster, by using ping or telnet (see the example after this list).
  • Ensure that a namespace exists for the Ceph-CSI deployment on the Kubernetes cluster. For example:
    kubectl create namespace ceph-csi-cephfs
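
For example, a minimal connectivity check, assuming the default Ceph monitor ports 3300 (msgr2 protocol) and 6789 (msgr1 protocol); MONITOR_IP is a placeholder for each monitor address:
    # Confirm that the monitor host answers
    ping -c 3 MONITOR_IP
    # Confirm that the monitor ports accept TCP connections
    telnet MONITOR_IP 3300
    telnet MONITOR_IP 6789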

About this task

For detailed information about configuring ceph-csi-cephfs Helm charts, see Configuring Helm ceph-csi-cephfs charts.

Procedure

Update the values.yaml file with the relevant values for Ceph-CSI driver configuration.

Important: Include only the values that you need to change; keep all other parameters at their defaults.
Note: For a full list of configurable Ceph-CSI Helm parameters and their default values, see Configuring Helm ceph-csi-cephfs charts.

  1. Create a values YAML file.
    For example, ceph-csi-cephfs-values.yaml.
  2. Update the required clusterID and monitor information, provided by the Ceph administrator. (A consolidated sample values file follows this procedure.)
    csiConfig:
      - clusterID: "CLUSTER_ID"
        monitors:
          - "MON_IP_01"
          - "MON_IP_02"
          - "MON_IP_03"
    provisioner:
      name: provisioner
      replicaCount: 2
  3. Optional: Update any other needed parameters.
  4. Verify that all Ceph monitors are reachable from the cluster, by pinging the monitors.
    ping MONITOR_IP
    For example,
    ping 10.85.8.118
  5. Install the Helm chart. (If the ceph-csi chart repository is not yet configured, see the note after this procedure.)
    For example,
    helm install --namespace ceph-csi-cephfs ceph-csi ceph-csi/ceph-csi-cephfs --set nodeplugin.plugin.image.repository=cp.icr.io/cp/ibm-ceph/cephcsi
    1. Check the Helm status.
      helm status ceph-csi-cephfs -n ceph-csi-cephfs
    2. Verify that the pods are running as expected.
      kubectl get pods -n ceph-csi-cephfs
  6. Add the Ceph configuration, by creating a configuration map.
    kubectl create configmap ceph-config --from-file=ceph.conf=ceph.conf -n ceph-csi-cephfs --dry-run=client -o yaml | kubectl apply -f -
  7. Update the configuration file, as required. For example,
    cat ceph.conf
    [global]
    fsid = e40b854e-d179-11ef-91a4-fa163e9eea52
    mon_host = v2:10.0.64.187:3300,v1:10.0.64.187:6789 v2:10.0.65.157:3300,v1:10.0.65.157:6789 v2:10.0.64.13:3300,v1:10.0.64.13:6789
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    fuse_big_writes = true
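
For reference, the following is a minimal, consolidated sketch of the ceph-csi-cephfs-values.yaml file from steps 1 through 3. The clusterID and monitor addresses repeat the example fsid and monitor IP addresses from the ceph.conf file above; replace them with the values provided by your Ceph administrator.
    csiConfig:
      - clusterID: "e40b854e-d179-11ef-91a4-fa163e9eea52"   # Ceph cluster fsid
        monitors:
          - "10.0.64.187"
          - "10.0.65.157"
          - "10.0.64.13"
    provisioner:
      name: provisioner
      replicaCount: 2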
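
Note: If the ceph-csi Helm chart repository that step 5 references is not yet configured, it can typically be added as follows. The upstream repository URL is an assumption; charts that IBM Sales provides might come from a different source.
    helm repo add ceph-csi https://ceph.github.io/csi-charts
    helm repo update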

What to do next

Verify the configuration map and then restart the deployment.
  1. Verify the ceph-config configuration map.
    kubectl describe configmap ceph-config -n ceph-csi-cephfs
    The command output is similar to the following example:
    Name:         ceph-config
    Namespace:    ceph-csi-cephfs
    Labels:       app=ceph-csi-cephfs
                  app.kubernetes.io/managed-by=Helm
                  chart=ceph-csi-cephfs-3.13.0
                  component=nodeplugin
                  heritage=Helm
                  release=ceph-csi-cephfs
    Annotations:  meta.helm.sh/release-name: ceph-csi-cephfs
                  meta.helm.sh/release-namespace: ceph-csi-cephfs
    
    Data
    ====
    ceph.conf:
    ----
    [global]
    fsid = e40b854e-d179-11ef-91a4-fa163e9eea52
    mon_host = v2:10.0.64.187:3300,v1:10.0.64.187:6789 v2:10.0.65.157:3300,v1:10.0.65.157:6789 v2:10.0.64.13:3300,v1:10.0.64.13:6789
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    fuse_big_writes = true
    
    keyring:
    ----
    
    BinaryData
    ====
    
    Events:  <none>
  2. Restart the ceph-csi-cephfs-provisioner deployment.
    kubectl rollout restart deployment/ceph-csi-cephfs-provisioner -n ceph-csi-cephfs
    For example,
    kubectl rollout restart deployment/ceph-csi-cephfs-provisioner -n ceph-csi-cephfs
    deployment.apps/ceph-csi-cephfs-provisioner restarted
  3. Restart the ceph-csi-cephfs-nodeplugin daemonset.
    kubectl rollout restart daemonset/ceph-csi-cephfs-nodeplugin -n ceph-csi-cephfs
    For example,
    kubectl rollout restart daemonset/ceph-csi-cephfs-nodeplugin -n ceph-csi-cephfs
    daemonset.apps/ceph-csi-cephfs-nodeplugin restarted
  4. Continue with the following steps. A minimal sketch of these resources follows this list.
    1. Create a Secret.
    2. Create a StorageClass.
    3. Create a PVC.
    4. Bind the PVC to a Pod resource.
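
The following is a minimal sketch of these resources, using assumed example names (csi-cephfs-secret, csi-cephfs-sc, csi-cephfs-pvc, csi-cephfs-demo-pod), a placeholder CephFS file system name, and placeholder credentials. The Secret keys and StorageClass parameters follow the upstream Ceph-CSI conventions; adjust all values for your environment.
    ---
    # Credentials for the CephFS user (placeholder values)
    apiVersion: v1
    kind: Secret
    metadata:
      name: csi-cephfs-secret
      namespace: ceph-csi-cephfs
    stringData:
      adminID: admin
      adminKey: ADMIN_KEY
    ---
    # StorageClass that provisions volumes through the CephFS CSI driver
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-cephfs-sc
    provisioner: cephfs.csi.ceph.com
    parameters:
      clusterID: "CLUSTER_ID"
      fsName: "FILESYSTEM_NAME"
      csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
      csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
      csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    ---
    # PVC that requests a CephFS-backed volume
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-cephfs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-cephfs-sc
    ---
    # Pod that binds the PVC
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-cephfs-demo-pod
    spec:
      containers:
        - name: web-server
          image: nginx
          volumeMounts:
            - name: cephfs-volume
              mountPath: /var/lib/www
      volumes:
        - name: cephfs-volume
          persistentVolumeClaim:
            claimName: csi-cephfs-pvc
After the resources are applied, kubectl get pvc csi-cephfs-pvc reports the claim as Bound when provisioning succeeds.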