Block storage

IBM Storage Ceph provides block storage, which it refers to as Ceph Block Devices. Block-based storage interfaces are the most common way to store data on media such as rotating hard drives and SSD or NVMe flash storage.

Ceph Block Devices interact with OSDs by using the librbd library.

Ceph Block Devices deliver high performance with infinite scalability to Kernel-based Virtual Machines (KVMs), such as Quick Emulator (QEMU), and to cloud-based computing systems, such as OpenStack, that rely on the libvirt and QEMU utilities to integrate with Ceph Block Devices. You can use the same storage cluster to operate the Ceph Object Gateway and Ceph Block Devices simultaneously.
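
For example, a hypervisor host that has the Ceph client configuration and keyring in place can create and inspect an RBD-backed image directly through QEMU. This is a minimal sketch; the pool1 and qemu-image1 names are placeholders:

    [root@rbd-client ~]# qemu-img create -f raw rbd:pool1/qemu-image1 10G
    [root@rbd-client ~]# qemu-img info rbd:pool1/qemu-image1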

You can manage Ceph Block Devices through either the Ceph dashboard or the command-line interface (CLI). For detailed information about Ceph Block Devices, see Ceph Block Devices.
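
For example, a minimal CLI sketch that creates a pool and a block device image might look like the following; the pool1 and image1 names are placeholders that the snapshot workflow later in this section reuses:

    [root@rbd-client ~]# ceph osd pool create pool1
    [root@rbd-client ~]# rbd pool init pool1
    [root@rbd-client ~]# rbd create --size 1024 --pool pool1 image1
    [root@rbd-client ~]# rbd ls pool1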

Common workloads

The following are the most common workloads for using Ceph Block Devices:

Database store
Use for backing up application databases as part of data protection.
Device mirroring
Use to protect against data loss or site failures.
Data resiliency
Use for replication and erasure coding (see the example after this list).
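
For example, a minimal sketch of resiliency through erasure coding places the image data in an erasure-coded pool while the image metadata stays in a replicated pool; the pool1, ec_pool, and image2 names are placeholders:

    [root@rbd-client ~]# ceph osd pool create ec_pool erasure
    [root@rbd-client ~]# ceph osd pool set ec_pool allow_ec_overwrites true
    [root@rbd-client ~]# rbd create --size 1024 --data-pool ec_pool pool1/image2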

Common use cases

The following are some of the most common use cases for Ceph Block Devices:

  • Storage for virtualization, hypervisors, and private clouds.
  • Persistent storage for containers (Kubernetes).
  • Storage for applications that require low latency and high performance, such as databases (see the kernel client example after this list).
  • Virtual machines for VMware, OpenStack, and OpenShift.
  • Unified file and object storage.
    • Ingest data via file protocols from existing (legacy) enterprise applications into a common repository based on an object store.
    • After the data is in the repository, access it via S3 for analytics, machine learning, and so on.
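
For example, a minimal sketch of the Linux kernel RBD client presenting a block device to a latency-sensitive application such as a database; the device path that rbd map prints and the mount point can differ on your system:

    [root@rbd-client ~]# rbd map pool1/image1
    [root@rbd-client ~]# mkfs.ext4 /dev/rbd0
    [root@rbd-client ~]# mkdir -p /mnt/data
    [root@rbd-client ~]# mount /dev/rbd0 /mnt/data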

Example workflow for using Ceph Block Device snapshots and clones

Learn about generic configurations that you can adapt to your environment. For detailed information about the procedures, see the relevant sections within the documentation.

This information provides an example workflow for using Ceph Block Device snapshots and clones. This workflow includes creating, listing, renaming, layering, protecting, and cloning Ceph Block Device snapshots.

For detailed information, see Managing snapshots.

  1. Create, list, and rename Ceph Block Device snapshots.
    [root@rbd-client ~]# rbd --pool pool1 snap create --snap snap1 image1
    [root@rbd-client ~]# rbd --pool pool1 --image image1 snap ls
    [root@rbd-client ~]# rbd snap ls pool1/image1
    [root@rbd-client ~]# rbd snap rename pool1/image1@snap1 pool1/image1@snap2
  2. Layer Ceph Block Device snapshots.

    To enable copy-on-read for cloned images, add rbd_clone_copy_on_read = true to either the [global] or [client] section of the ceph.conf file.
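
    For example, the relevant ceph.conf snippet might look like the following (a minimal sketch; set it on the client nodes that open the cloned images):

    [client]
    rbd_clone_copy_on_read = true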

    For detailed information, see Ceph Block Device layering.

  3. Protect and clone Ceph Block Device snapshots.
    [root@rbd-client ~]# ceph osd set-require-min-compat-client mimic
    [root@rbd-client ~]# rbd clone --pool pool1 --image image1 --snap snap2 --dest-pool pool2 --dest childimage1
    [root@rbd-client ~]# rbd clone pool1/image1@snap2 pool1/childimage1
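
    Setting the minimum compatible client to mimic enables clone v2, which allows cloning snapshots without protecting them first. If your cluster must support clients older than mimic, protect the snapshot before cloning and unprotect it only after all of its clones are flattened or removed, for example:

    [root@rbd-client ~]# rbd snap protect pool1/image1@snap2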