Appendix
Know the different configurations of IBM Storage Ready Nodes for IBM Storage Ceph.
IBM Storage Ceph daemons can have different configurations based on OSDs, the Ceph Object Gateway, MDS, pools, and so on. For more information, see Operations.
OSD configuration
A typical OSD configuration is defined in a service specification YAML file and applied with the ceph orch apply command.
An example of a hybrid OSD configuration, which places data on rotational devices (HDDs) and the BlueStore DB on non-rotational devices (SSDs), is as follows:
cat osd-hybrid.yaml
service_type: osd
service_id: osd_hybrid
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
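For example, assuming the specification is saved as osd-hybrid.yaml on a node with access to the cephadm shell, you can apply it and then list the resulting service and OSD daemons:
ceph orch apply -i osd-hybrid.yaml
ceph orch ls osd
ceph orch ps --daemon-type osd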
Pool configuration
A replicated pool requires a minimum of four Ready Nodes.
An example of a replicated pool configuration is as follows:
ceph osd pool create default.rgw.buckets.data 8192 8192 --autoscale-mode=off
ceph osd pool create default.rgw.buckets.index 2048 2048 --autoscale-mode=off
ceph osd pool create default.rgw.buckets.non-ec 2048 2048 --autoscale-mode=off
for i in data index non-ec; do ceph osd pool application enable default.rgw.buckets.${i} rgw; done
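To verify the pools afterward, you can list their details; note that a replicated pool uses the cluster default replica count (typically 3) unless you set it explicitly, for example:
ceph osd pool ls detail | grep default.rgw.buckets
ceph osd pool get default.rgw.buckets.data size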
A 4+2 erasure-coded pool requires a minimum of seven Ready Nodes: with crush-failure-domain=host, each of the k+m=6 chunks is placed on a separate host, and one additional node provides headroom for recovery.
An example of a 4+2 erasure-coded pool configuration is as follows:
ceph osd erasure-code-profile set ec42 plugin=jerasure technique=reed_sol_van k=4 m=2 crush-failure-domain=host stripe_unit=4K
ceph osd pool create default.rgw.buckets.data 8192 8192 erasure ec42 --autoscale-mode=off
ceph osd pool create default.rgw.buckets.index 2048 2048 --autoscale-mode=off
ceph osd pool create default.rgw.buckets.non-ec 2048 2048 --autoscale-mode=off
ceph osd pool set default.rgw.buckets.non-ec size 2
for i in data index non-ec; do ceph osd pool application enable default.rgw.buckets.${i} rgw; done
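Before creating pools that depend on a profile, you can confirm that the profile was stored with the intended parameters, for example:
ceph osd erasure-code-profile get ec42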
An 8+3 erasure-coded pool requires a minimum of 12 Ready Nodes: with crush-failure-domain=host, each of the k+m=11 chunks is placed on a separate host, and one additional node provides headroom for recovery.
An example of an 8+3 erasure-coded pool configuration is as follows:
ceph osd erasure-code-profile set ec83 plugin=jerasure technique=reed_sol_van k=8 m=3 crush-failure-domain=host stripe_unit=4K
ceph osd pool create default.rgw.buckets.data 8192 8192 erasure ec83 --autoscale-mode=off
ceph osd pool create default.rgw.buckets.index 2048 2048 --autoscale-mode=off
ceph osd pool create default.rgw.buckets.non-ec 2048 2048 --autoscale-mode=off
ceph osd pool set default.rgw.buckets.non-ec size 2
for i in data index non-ec; do ceph osd pool application enable default.rgw.buckets.${i} rgw; done
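To confirm that the rgw application was enabled on each pool, you can query a pool directly, for example:
ceph osd pool application get default.rgw.buckets.data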
Ceph Object Gateway
The following example is a service specification file that deploys the Ceph Object Gateway:
service_type: rgw
service_id: object
placement:
  label: rgw
  count_per_host: 2
networks:
- 192.169.142.0/24
spec:
  rgw_frontend_port: 8080
  rgw_frontend_ssl_certificate: |
    -----BEGIN PRIVATE KEY-----
    V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
    ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
    IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
    YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
    ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
    -----END PRIVATE KEY-----
    -----BEGIN CERTIFICATE-----
    V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
    ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
    IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
    YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
    ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
    -----END CERTIFICATE-----
  ssl: true
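For example, assuming the specification is saved as rgw-object.yaml (a file name chosen here for illustration), you can apply it and verify the deployed gateway daemons:
ceph orch apply -i rgw-object.yaml
ceph orch ps --daemon-type rgw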