Integrating external Ceph cluster with your IBM Cloud Private cluster

Integrate a Ceph cluster that is configured outside of your IBM® Cloud Private environment.

IBM Cloud Private consumes Ceph storage through the built-in Kubernetes kubernetes.io/rbd provisioner. Application workloads use Ceph block storage through Kubernetes dynamic volume provisioning, based on a configured storage class.

Note: Instructions in the following sections are based on Ceph v12.2.10 Luminous and IBM Cloud Private Version 3.1.2.

Prepare your external Ceph cluster

Complete these steps to prepare your Ceph cluster for integration with your IBM Cloud Private cluster. You must be an administrator to run these commands.

Create a Ceph pool and a user ID for the pool

Complete the following steps to create a Ceph pool and a user ID that can be used in your IBM Cloud Private storage class:

  1. Create a Ceph pool.
ceph osd pool create demo 8 8

Following is a sample output:

pool 'demo' created
  2. Assign the RBD application to the pool so that the pool can be used as a block device.
ceph osd pool application enable demo rbd

Following is a sample output:

enabled application 'rbd' on pool 'demo'
  3. Create an auth user for the pool to mount RBD volumes on your IBM Cloud Private cluster nodes.
ceph auth add client.demo mon 'allow r' osd 'allow rwx pool=demo'

Following is a sample output:

added key for client.demo
  4. Verify that the user is created. (A more targeted check follows the sample output.)
ceph auth ls

Following is a sample output:

installed auth entries:

osd.0
    key: AQB5hEVclZvxFRAAnIhvzBMHgaN+cqpEXQStmQ==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: AQCXhEVc40LyNhAABYGlOVafoVXgVQgCttdvIw==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.2
    key: AQDjhEVcaKoIFhAAwiXG6puVjWsrVmzgVv4Q/g==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQDOe0VcCle6ERAA6L82BeosLNJ7FJwqq5W1+A==
    caps: [mds] allow *
    caps: [mgr] allow *
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-mds
    key: AQDPe0VcGoqTDhAAS/0mJFrdrL+EYkJWJC7BsQ==
    caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
    key: AQDQe0VcCKKmBBAA/wcLWNISSOlD2Ju56Pp71w==
    caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
    key: AQDRe0VcwZcFBhAAXlAQw/wa1qZhWylrJeMr9g==
    caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
    key: AQDRe0VcK6NuOxAAQO8SUEMEQRLud/Wls8BBvA==
    caps: [mon] allow profile bootstrap-rgw
client.demo
    key: AQBLflJcPW5UFxAAYIqBmmT3sRdADV7GbArZPQ==
    caps: [mon] allow r
    caps: [osd] allow rwx pool=demo
mgr.sanverm22-master.fyre.ibm.com
    key: AQApfEVcE9RPExAA3RKGvibVhJzOJOH3OYVVRQ==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
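
As a quick follow-up check, you can display only the new user's entry and confirm its capabilities. This is an optional sketch:

    ceph auth get client.demo

The output should show the mon and osd capabilities that you assigned in step 3.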

Get the Ceph user ID and admin ID keys

Get the keys for the admin user and demo user. The keys are required for creating secrets and a storage class in your IBM Cloud Private cluster.

  1. Get the key for the adminId.

    ceph auth get-key client.admin | base64
    

    Following is a sample output:

    QVFET2UwVmNDbGU2RVJBQTZMODJCZW9zTE5KN0ZKd3FxNVcxK0E9PQ==
    
  2. Get the key for the userId.

    ceph auth get-key client.demo | base64
    

    Following is a sample output:

    QVFCTGZsSmNQVzVVRnhBQVlJcUJtbVQzc1JkQURWN0diQXJaUFE9PQ==
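
If you script the integration, you can capture both keys in one pass. The following is an optional sketch that assumes you run it on a Ceph node with admin privileges:

    # Capture the base64-encoded keys for use in the Kubernetes secrets
    ADMIN_KEY=$(ceph auth get-key client.admin | base64)
    USER_KEY=$(ceph auth get-key client.demo | base64)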
    

Prepare your IBM Cloud Private cluster for integration with the external Ceph cluster

Prepare your IBM Cloud Private cluster nodes

Install Ceph client software on each IBM Cloud Private cluster node where your application pods that use Ceph storage might get scheduled.
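
The exact client packages depend on the operating system of your nodes. The following is a sketch that assumes the Ceph Luminous package repository is already configured on each node:

    # Red Hat Enterprise Linux or CentOS nodes
    sudo yum install -y ceph-common

    # Ubuntu nodes
    sudo apt-get install -y ceph-common

The ceph-common package provides the rbd client binary that the kubelet uses to map RBD volumes.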

Create secrets in your IBM Cloud Private cluster by using Ceph client IDs

Before you begin, install kubectl in your IBM Cloud Private cluster. For more information, see Installing the Kubernetes CLI (kubectl). Complete these tasks on your IBM Cloud Private master node.

Create a secret for the Ceph client ID adminId

Note: For the admin key, see Get the Ceph user ID and admin ID keys.

  1. Use the following YAML file to create the secret. Save the file as ceph-admin-secret.yaml:

    Note: The secret must be of type kubernetes.io/rbd.

apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64
  key: QVFET2UwVmNDbGU2RVJBQTZMODJCZW9zTE5KN0ZKd3FxNVcxK0E9PQ==
  2. Run the following command to create the secret:

    kubectl create -f ceph-admin-secret.yaml
    
  3. Verify that the secret is created.

    kubectl get secrets -n kube-system | grep ceph
    

    Following is a sample output:

    ceph-admin-secret                                           kubernetes.io/rbd                     1      53m
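
Alternatively, you can create an equivalent secret directly from the raw key, without writing a YAML file. The following is an optional sketch to run where both the ceph and kubectl CLIs are available; kubectl encodes literal values itself, so you pass the raw key rather than its base64-encoded form:

    kubectl create secret generic ceph-admin-secret \
      --namespace=kube-system \
      --type="kubernetes.io/rbd" \
      --from-literal=key="$(ceph auth get-key client.admin)"

The same approach works for the user secret with client.demo.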
    

Create a secret for the Ceph client ID userId

Note: For the user key, see Get the Ceph user ID and admin ID keys.

  1. Use the following YAML file to create the secret. Save the file as ceph-secret.yaml:

    Note: The secret must be of type kubernetes.io/rbd.

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth add client.demo mon 'allow r' osd 'allow rwx pool=demo'
  # ceph auth get-key client.demo | base64
  key: QVFCTGZsSmNQVzVVRnhBQVlJcUJtbVQzc1JkQURWN0diQXJaUFE9PQ==
  2. Run the following command to create the secret:

    kubectl create -f ceph-secret.yaml
    
  3. Verify that the secret is created.

    kubectl get secrets -n kube-system | grep ceph
    

    Following is a sample output:

    ceph-admin-secret                                           kubernetes.io/rbd                     1      53m
    ceph-secret                                                 kubernetes.io/rbd                     1      53m
    

Create a storage class in your IBM Cloud Private cluster

Create a storage class for the kubernetes.io/rbd provisioner. For more information, see Ceph RBD.

  1. Use the following YAML file to create a storage class. You must review each parameter and provide the values for your Ceph cluster. Save the file as rbd-storage-class.yaml:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: rbd
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: 1.1.1.1:6789,1.1.1.2:6789,1.1.1.3:6789
      pool: demo
      adminId: admin
      adminSecretNamespace: kube-system
      adminSecretName: ceph-admin-secret
      userId: demo
      userSecretNamespace: kube-system
      userSecretName: ceph-secret
      imageFormat: "2"
      imageFeatures: "layering"
    
  2. Create the storage class.

    kubectl create -f rbd-storage-class.yaml

    Following is a sample output:

    storageclass.storage.k8s.io/rbd created
    
  3. Verify that the storage class is successfully created.

    kubectl get sc
    

Following is a sample output:

NAME                       PROVISIONER                    AGE
rbd                        kubernetes.io/rbd              5s
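
Optionally, to make this the default storage class so that PVCs without an explicit storageClassName use it, you can set the standard Kubernetes default-class annotation. A sketch:

    kubectl patch storageclass rbd -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'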

Provision a persistent volume in your IBM Cloud Private cluster

Use the following sample YAML file to create a persistent volume claim (PVC) by using the storage class that you created earlier:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rbd
  resources:
    requests:
      storage: 1Gi
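
Save the claim to a file and create it with kubectl. The file name demo-pvc.yaml is an assumption for this sketch:

    kubectl create -f demo-pvc.yaml
    kubectl get pvc demo

When dynamic provisioning succeeds, the STATUS column for the claim shows Bound.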

Use this PVC in an application pod and verify that the volume is mounted in the pod.
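
Following is a minimal example pod that mounts the claim. The image and mount path are illustrative assumptions; substitute your own application:

    kind: Pod
    apiVersion: v1
    metadata:
      name: demo-pod
    spec:
      containers:
      - name: app
        image: nginx            # illustrative image only
        volumeMounts:
        - name: data
          mountPath: /data      # assumed mount path
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: demo

After the pod is running, use kubectl exec to confirm that the volume is mounted at the path that you specified.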