Creating a pool

Learn how to create a pool.

Before creating pools, see Pools, placement groups, and CRUSH configuration.

Note: A system administrator must expressly enable a pool to receive I/O operations from Ceph clients. See Enabling a client application for details. Failure to enable a pool results in a HEALTH_WARN status.
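For example, to enable a pool with the hypothetical name mypool for use by the RADOS Block Device (rbd) client application:

Example

ceph osd pool application enable mypool rbd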

It is better to adjust the default number of placement groups in the Ceph configuration file, because the default value might not suit your needs.

Example

osd pool default pg num = 100
osd pool default pgp num = 100

To create a replicated pool, execute:

Syntax

ceph osd pool create POOL_NAME PG_NUM PGP_NUM [replicated] 
         [CRUSH_RULE_NAME] [EXPECTED_NUMBER_OBJECTS]
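For example, to create a replicated pool with the hypothetical name mypool and 64 placement groups:

Example

ceph osd pool create mypool 64 64 replicated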

To create an erasure-coded pool, execute:

Syntax

ceph osd pool create POOL_NAME PG_NUM PGP_NUM erasure 
         [ERASURE_CODE_PROFILE] [CRUSH_RULE_NAME] [EXPECTED_NUMBER_OBJECTS]
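For example, to create an erasure-coded pool with the hypothetical name ecpool, using the default erasure code profile:

Example

ceph osd pool create ecpool 32 32 erasure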

Create a bulk pool:

Syntax

ceph osd pool create POOL_NAME [--bulk]
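For example, to create a bulk pool with the hypothetical name bulkpool, letting Ceph choose the number of placement groups:

Example

ceph osd pool create bulkpool --bulk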

Where:

POOL_NAME
Description
The name of the pool. It must be unique.

Type
String

Required
Yes

Default
ceph

PG_NUM
Description
The total number of placement groups for the pool. For more information about calculating a suitable number, see Placement Groups and Ceph Placement Groups (PGs) per Pool Calculator on the Red Hat Customer Portal. The default value 8 is not suitable for most systems.

Type
Integer

Required
Yes

Default
8

PGP_NUM
Description
The total number of placement groups for placement purposes. This value must be equal to the total number of placement groups, except for placement group splitting scenarios.

Type
Integer

Required
Yes. If not specified, it is set to the value listed in the Ceph configuration file or to the default value.

Default
8

replicated or erasure
Description
The pool type can be either replicated, to recover from lost OSDs by keeping multiple copies of the objects, or erasure, to get a kind of generalized RAID5 capability. Replicated pools require more raw storage but implement all Ceph operations. Erasure-coded pools require less raw storage but support only a subset of the available operations.

Type
String

Required
No

Default
replicated

crush-rule-name
Description
The name of the CRUSH rule for the pool. The rule MUST exist. For replicated pools, the name is the rule specified by the osd_pool_default_crush_rule configuration setting. For erasure-coded pools, the name is erasure-code if you use the default erasure code profile, or POOL_NAME otherwise. Ceph implicitly creates a rule with the specified name if the rule does not already exist.

Type
String

Required
No

Default
Uses erasure-code for an erasure-coded pool. For replicated pools, it uses the value of the osd_pool_default_crush_rule variable from the Ceph configuration.

expected-num-objects
Description
The expected number of objects for the pool. By setting this value together with a negative filestore_merge_threshold variable, Ceph splits the placement groups at pool creation time to avoid the latency impact of runtime directory splitting.

Type
Integer

Required
No

Default
0, no splitting at pool creation time

erasure-code-profile
Description
For erasure-coded pools only. Use the erasure code profile. It must be an existing profile as defined by the osd erasure-code-profile set variable in the Ceph configuration file. For more information, see Erasure code profiles.

Type
String

Required
No
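For example, to define a profile with the hypothetical name myprofile, with four data chunks and two coding chunks, and then use it when creating an erasure-coded pool with the hypothetical name ecpool:

Example

ceph osd erasure-code-profile set myprofile k=4 m=2
ceph osd pool create ecpool 32 32 erasure myprofile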

When you create a pool, set the number of placement groups to a reasonable value (for example, 100). Also consider the total number of placement groups per OSD. Placement groups are computationally expensive, so performance degrades when you have many pools with many placement groups, for example, 50 pools with 100 placement groups each. The point of diminishing returns depends on the power of the OSD host.