Adding a Ceph OSD node

To expand the capacity of the IBM Storage Ceph cluster, you can add an OSD node.

Prerequisites

  • A running IBM Storage Ceph cluster.

  • A provisioned node with a network connection.

Procedure

  1. Verify that other nodes in the storage cluster can reach the new node by its short host name.
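
    For example, you can check reachability with ping, assuming host02 is the new node’s short host name:

    Example

    [ceph: root@host01 /]# ping -c 3 host02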

  2. Temporarily disable scrubbing:

    Example

    [ceph: root@host01 /]# ceph osd set noscrub
    [ceph: root@host01 /]# ceph osd set nodeep-scrub
  3. Limit the backfill and recovery features:

    Syntax

    ceph tell DAEMON_TYPE.* injectargs --OPTION_NAME VALUE [--OPTION_NAME VALUE]

    Example

    [ceph: root@host01 /]# ceph tell osd.* injectargs --osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1
  4. Extract the cluster’s public SSH key to a file:

    Syntax

    ceph cephadm get-pub-key > ~/PATH

    Example

    [ceph: root@host01 /]# ceph cephadm get-pub-key > ~/ceph.pub
  5. Copy the Ceph cluster’s public SSH key to the root user’s authorized_keys file on the new host:

    Syntax

    ssh-copy-id -f -i ~/PATH root@HOST_NAME_2

    Example

    [ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02
  6. Add the new node to the CRUSH map:

    Syntax

    ceph orch host add NODE_NAME IP_ADDRESS

    Example

    [ceph: root@host01 /]# ceph orch host add host02 10.10.128.70
  7. Add an OSD for each disk on the node to the storage cluster.
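
    For example, assuming the new node is host02 and has an available device at /dev/sdb (adjust the device path for your hardware):

    Syntax

    ceph orch daemon add osd HOST_NAME:DEVICE_PATH

    Example

    [ceph: root@host01 /]# ceph orch daemon add osd host02:/dev/sdb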

Important: When adding an OSD node to an IBM Storage Ceph cluster, IBM recommends adding one OSD daemon at a time and allowing the cluster to recover to an active+clean state before proceeding to the next OSD.
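
After the cluster has recovered to an active+clean state, re-enable scrubbing:

    Example

    [ceph: root@host01 /]# ceph osd unset noscrub
    [ceph: root@host01 /]# ceph osd unset nodeep-scrub

Also restore the backfill and recovery options limited in step 3 to the values your cluster used previously.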