By default, Ceph uses replicated pools for data pools. However, you can add an additional
erasure-coded data pool to the Ceph File System, as needed.
Before you begin
Make sure that you have the following prerequisites in place:
- A running IBM Storage Ceph cluster.
- An existing Ceph File System.
- Pools that use BlueStore OSDs (a quick check is shown after this list).
- Root-level access to a Ceph Monitor node.
- The attr package installed.
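If you are unsure whether the OSDs in the cluster are backed by BlueStore, one quick way to check is to query an OSD's metadata, which reports the object store backend. The OSD ID 0 below is only an example; run the check against any OSD in your cluster.
[root@mon ~]# ceph osd metadata 0 | grep osd_objectstore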
About this task
Ceph File Systems (CephFS) backed by erasure-coded pools use less overall storage compared
to Ceph File Systems that are backed by replicated pools. While erasure-coded pools use less overall
storage, they also use more memory and processor resources than replicated pools.
Important: For production environments, use the default replicated data pool for CephFS. The
creation of inodes in CephFS creates at least one object in the default data pool. It is better to
use a replicated pool for the default data to improve small-object write performance, and to improve
read performance for updating backtraces.
Procedure
- Create an erasure-coded data pool for CephFS.
ceph osd pool create DATA_POOL_NAME erasure
For example,
[root@mon ~]# ceph osd pool create cephfs-data-ec01 erasure
pool 'cephfs-data-ec01' created
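The command above creates the pool with the cluster's default erasure code profile. If you need specific data and coding chunk counts, you can create a profile first and pass its name to the pool create command. The profile name and the k=4 m=2 values below are only illustrative; choose values that match your OSD count and failure domain.
ceph osd erasure-code-profile set PROFILE_NAME k=4 m=2
ceph osd pool create DATA_POOL_NAME erasure PROFILE_NAME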
- Verify that the pool was added.
[root@mon ~]# ceph osd lspools
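Optionally, ceph osd pool ls detail also shows the pool type and the erasure code profile in use, which confirms that the new pool is in fact erasure coded. The grep filter below simply narrows the output to the example pool name.
[root@mon ~]# ceph osd pool ls detail | grep cephfs-data-ec01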
- Enable overwrites on the erasure-coded pool.
ceph osd pool set DATA_POOL_NAME allow_ec_overwrites true
For example,
[root@mon ~]# ceph osd pool set cephfs-data-ec01 allow_ec_overwrites true
set pool 15 allow_ec_overwrites to true
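You can optionally read the flag back to confirm that overwrites are now allowed on the pool; the pool name below matches the earlier example.
[root@mon ~]# ceph osd pool get cephfs-data-ec01 allow_ec_overwrites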
- Verify the status of the Ceph File System.
ceph fs status FILE_SYSTEM_NAME
For example,
[root@mon ~]# ceph fs status cephfs-ec
cephfs-ec - 14 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921
POOL TYPE USED AVAIL
cephfs-metadata-ec metadata 787M 8274G
cephfs-data-ec data 2360G 12.1T
STANDBY MDS
cephfs-ec.example.irsrql
cephfs-ec.example.cauuaj
- Add the erasure-coded data pool to the existing CephFS.
ceph fs add_data_pool FILE_SYSTEM_NAME DATA_POOL_NAME
In the following example, the new data pool, cephfs-data-ec01, is added to the existing
Ceph File System, cephfs-ec.
[root@mon ~]# ceph fs add_data_pool cephfs-ec cephfs-data-ec01
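As a quick alternative check, ceph fs ls lists each file system together with its metadata pool and data pools, so the newly attached pool should appear there as well.
[root@mon ~]# ceph fs ls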
- Verify that the erasure-coded pool was added to the Ceph File System.
ceph fs status FILE_SYSTEM_NAME
For example,
[root@mon ~]# ceph fs status cephfs-ec
cephfs-ec - 14 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921
POOL TYPE USED AVAIL
cephfs-metadata-ec metadata 787M 8274G
cephfs-data-ec data 2360G 12.1T
cephfs-data-ec01 data 0 12.1T
STANDBY MDS
cephfs-ec.example.irsrql
cephfs-ec.example.cauuaj
- Set the file layout on a new directory.
mkdir PATH_TO_DIRECTORY
setfattr -n ceph.dir.layout.pool -v DATA_POOL_NAME PATH_TO_DIRECTORY
In the following example, all new files that are created in the /mnt/cephfs/newdir directory
inherit the directory layout and place their data in the newly added erasure-coded pool.
[root@mon ~]# mkdir /mnt/cephfs/newdir
[root@mon ~]# setfattr -n ceph.dir.layout.pool -v cephfs-data-ec01 /mnt/cephfs/newdir
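The attr package from the prerequisites also provides getfattr, so you can optionally read the layout attribute back and confirm that it now points at the new pool.
[root@mon ~]# getfattr -n ceph.dir.layout.pool /mnt/cephfs/newdir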