Dynamic provisioning in Fusion Data Foundation
IBM Storage Fusion Data Foundation is software-defined storage that is optimized for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers.
Fusion Data Foundation supports a variety of storage types, including:
- Block storage for databases.
- Shared file storage for continuous integration, messaging, and data aggregation.
- Object storage for archival, backup, and media storage.
Fusion Data Foundation uses IBM Storage Ceph to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview).
In Fusion Data Foundation, the IBM Storage Ceph Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles dynamic provisioning requests. When a dynamic PVC request comes in, the CSI driver has the following options (sample claims follow this list and its note):
- Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block.
- Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem.
- Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS with volume mode Filesystem.
- Create a PVC with ReadWriteOncePod (RWOP) access that is based on CephFS, NFS, or RBD. With RWOP access mode, the volume is mounted as read-write by a single pod on a single node.
Important: The ReadWriteOncePod (RWOP) access mode is a Technology Preview feature. IBM does not recommend using Technology Preview features in production. They provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
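For example, a claim's access mode and volume mode together select one of the behaviors above. The following minimal sketches assume the default storage class names ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs; the class names in your deployment may differ.

# Sketch: an RWX raw-block claim served by the RBD driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-block-pvc
spec:
  accessModes:
    - ReadWriteMany          # RWX on RBD is available only with volumeMode: Block
  volumeMode: Block          # raw block device, no filesystem layered on top
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed default class name
---
# Sketch: a shared filesystem claim served by the CephFS driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-shared-pvc
spec:
  accessModes:
    - ReadWriteMany          # RWX with volumeMode: Filesystem requires CephFS
  volumeMode: Filesystem     # the default volume mode; shown here for clarity
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-cephfs     # assumed default class name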
Which driver (RBD or CephFS) serves a request is determined by the provisioner entry in the storageclass.yaml file.
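A minimal sketch of such a storage class follows. The provisioner strings follow the usual <operator-namespace>.<driver>.csi.ceph.com pattern, and the pool name is hypothetical; a production class also carries driver-specific secret and parameter settings that are omitted here.

# Sketch: the provisioner value routes claims to the RBD driver;
# a CephFS class would use <namespace>.cephfs.csi.ceph.com instead.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-ceph-rbd
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  clusterID: openshift-storage   # assumed operator namespace
  pool: example-rbd-pool         # hypothetical Ceph block pool
reclaimPolicy: Delete
allowVolumeExpansion: true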