Migrating a pool
Sometimes it is necessary to migrate all objects from one pool to another. This is needed when a parameter that cannot be modified on an existing pool must be changed, for example, to reduce the number of placement groups of a pool.
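For example, you can check the current number of placement groups of a pool before deciding to migrate it. The pool name pool2 is only an illustration:
[ceph: root@host01 /]# ceph osd pool get pool2 pg_num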
About this task
Important: When a workload uses only Ceph Block Device images, use the Ceph Block Device migration procedures instead of the procedures documented here.
The migration methods described for Ceph Block Device are recommended over those documented here. Using rados cppool does not preserve all snapshots and snapshot-related metadata, which results in an unfaithful copy of the data. For example, copying an RBD pool does not completely copy the image: the snapshots are not present and do not work properly. rados cppool also does not preserve the user_version field that some librados users may rely on.
If migrating a pool is necessary and your user workloads contain images other than Ceph Block Devices, continue with one of the procedures documented here.
Before you begin
- If using the rados cppool command:
  - Read-only access to the pool is required during the copy.
  - Use this command only if the pool contains no RBD images with their snapshots, and no librados users depend on the user_version field. You can check for RBD images before copying, as shown below.
- If using the local drive RADOS commands, verify that sufficient cluster space is available. Two, three, or more copies of the data will be present, according to the pool replication factor.
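For example, a quick way to verify that a pool contains no RBD images is to list them; an empty result means the pool has none. The pool name pool2 is only an illustration:
[ceph: root@host01 /]# rbd ls --pool pool2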
Migrating directly
Important: Read-only access to the pool is required during the copy.
ceph osd pool create NEW_POOL PG_NUM [ <other new pool parameters> ]
rados cppool SOURCE_POOL NEW_POOL
ceph osd pool rename SOURCE_POOL NEW_SOURCE_POOL_NAME
ceph osd pool rename NEW_POOL SOURCE_POOL
For example:
[ceph: root@host01 /]# ceph osd pool create pool1 250
[ceph: root@host01 /]# rados cppool pool2 pool1
[ceph: root@host01 /]# ceph osd pool rename pool2 pool3
[ceph: root@host01 /]# ceph osd pool rename pool1 pool2
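As an optional check that is not part of the procedure above, you can compare the object counts of the two pools after the copy. Following the example above, pool2 is the new copy and pool3 is the original source pool:
[ceph: root@host01 /]# rados df
Compare the OBJECTS column for pool2 and pool3; the counts should match if the copy completed successfully.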