Adding a Ceph OSD node
To expand the capacity of the IBM Storage Ceph cluster, you can add an OSD node.
Prerequisites
- A running IBM Storage Ceph cluster.
- A provisioned node with a network connection.
Procedure
- Verify that other nodes in the storage cluster can reach the new node by its short host name, as in the check sketched below.
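For example, a quick reachability check run from any existing node, assuming the new node's short host name is host02 (the name used in the examples below):
[root@host01 ~]# ping -c 1 host02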
- Temporarily disable scrubbing:
Example
[ceph: root@host01 /]# ceph osd set noscrub
[ceph: root@host01 /]# ceph osd set nodeep-scrub
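To confirm that both flags are set, you can inspect the cluster flags; the grep filter is just one convenient way to narrow the output:
[ceph: root@host01 /]# ceph osd dump | grep flags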
- Limit the backfill and recovery features:
Syntax
ceph tell DAEMON_TYPE.* injectargs --OPTION_NAME VALUE [--OPTION_NAME VALUE]
Example
[ceph: root@host01 /]# ceph tell osd.* injectargs --osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1
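Values set with injectargs apply at runtime only and revert when the OSD daemons restart. To spot-check an active value on a single daemon (osd.0 here is just a sample daemon ID):
[ceph: root@host01 /]# ceph tell osd.0 config get osd_max_backfills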
- Extract the cluster's public SSH keys to a folder:
Syntax
ceph cephadm get-pub-key > ~/PATH
Example
[ceph: root@host01 /]# ceph cephadm get-pub-key > ~/ceph.pub
- Copy the Ceph cluster's public SSH keys to the root user's authorized_keys file on the new host:
Syntax
ssh-copy-id -f -i ~/PATH root@HOST_NAME_2
Example
[ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02
- Add the new node to the CRUSH map:
Syntax
ceph orch host add NODE_NAME IP_ADDRESS
Example
[ceph: root@host01 /]# ceph orch host add host02 10.10.128.70
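To verify that the orchestrator registered the new host, you can list the hosts in the storage cluster:
[ceph: root@host01 /]# ceph orch host ls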
- Add an OSD for each disk on the node to the storage cluster, as sketched after this note.
Important: When adding an OSD node to an IBM Storage Ceph cluster, IBM recommends adding one OSD daemon at a time and allowing the cluster to recover to an active+clean state before proceeding to the next OSD.
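A minimal sketch of this step with the orchestrator, assuming the new node exposes an unused device at /dev/sdb (a hypothetical path; list the actual candidate devices first):
[ceph: root@host01 /]# ceph orch device ls host02
[ceph: root@host01 /]# ceph orch daemon add osd host02:/dev/sdb
[ceph: root@host01 /]# ceph -s
Repeat for each remaining disk once ceph -s reports active+clean. When all OSDs have been added and the cluster is healthy, remember to re-enable scrubbing and restore your usual backfill and recovery settings, for example:
[ceph: root@host01 /]# ceph osd unset noscrub
[ceph: root@host01 /]# ceph osd unset nodeep-scrub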