Colocation

Use this information to understand how colocation works and its advantages.

You can colocate containerized Ceph daemons on the same host.

Colocating some Ceph services provides the following advantages:
  • Significant improvement in total cost of ownership (TCO) at small scale

  • Reduction from six hosts to three for the minimum configuration

  • Easier upgrades

  • Better resource isolation

How colocation works

With the help of the Cephadm orchestrator, you can colocate one daemon from the following list with one or more OSD daemons (ceph-osd):

  • Ceph Monitor (ceph-mon) and Ceph Manager (ceph-mgr) daemons

  • NFS Ganesha (nfs-ganesha) for Ceph Object Gateway (radosgw)

  • RBD Mirror (rbd-mirror)

  • Observability Stack (Grafana)

Additionally, for Ceph Object Gateway (radosgw) and Ceph File System (ceph-mds), you can colocate either of them with an OSD daemon plus one daemon from the previous list, excluding RBD Mirror (rbd-mirror).

Note: Colocating two daemons of the same kind on a given node is not supported.
Note: Because ceph-mon and ceph-mgr work closely together, they do not count as two separate daemons for the purposes of colocation.
Note: IBM recommends colocating the Ceph Object Gateway with Ceph OSD containers to increase performance.
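As a sketch of how such a layout is declared, the Cephadm orchestrator accepts service specifications that pin daemons to hosts. The following fragment is illustrative only; the host names and the OSD service ID are placeholder values:

```yaml
# Hypothetical cephadm service specification: colocate Monitor and
# Manager daemons with OSDs on the same three hosts.
service_type: mon
placement:
  hosts:
    - host1
    - host2
    - host3
---
service_type: mgr
placement:
  hosts:
    - host1
    - host2
    - host3
---
service_type: osd
service_id: default_drives   # illustrative service ID
placement:
  hosts:
    - host1
    - host2
    - host3
spec:
  data_devices:
    all: true                # consume all available data devices
```

Applying such a file with `ceph orch apply -i <spec-file>` would place the Monitor, Manager, and OSD daemons together on each listed host.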

Given the colocation rules above, the following are the minimum cluster sizes that comply with them:

Example 1

  1. Media: Full flash systems (SSDs)
  2. Use case: Block (RADOS Block Device) and File (CephFS), or Object (Ceph Object Gateway)
  3. Number of nodes: 3
  4. Replication scheme: 2
Table 1. Colocated Daemons Example 1
Host    Daemon  Daemon           Daemon
host1   OSD     Monitor/Manager  Grafana
host2   OSD     Monitor/Manager  RGW or CephFS
host3   OSD     Monitor/Manager  RGW or CephFS
Note: The minimum size for a storage cluster with three replicas is four nodes; for two replicas, it is three nodes. The cluster needs as many nodes as the replication factor, plus one extra node, so that the loss of a node does not leave the cluster in a degraded state for an extended period.
Figure 1. Colocated Daemons Example 1
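One way the layout in Example 1 might be expressed as a Cephadm specification fragment. This is a hedged sketch: the host names and RGW service ID are illustrative, and the mon, mgr, and OSD placements on all three hosts are omitted for brevity:

```yaml
# Hypothetical spec fragment for Example 1: Grafana on host1,
# Ceph Object Gateway on host2 and host3.
service_type: grafana
placement:
  hosts:
    - host1
---
service_type: rgw
service_id: example          # illustrative service ID
placement:
  hosts:
    - host2
    - host3
```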

Example 2

  1. Media: Full flash systems (SSDs) or spinning devices (HDDs)
  2. Use case: Block (RBD), File (CephFS), and Object (Ceph Object Gateway)
  3. Number of nodes: 4
  4. Replication scheme: 3
Table 2. Colocated Daemons Example 2
Host    Daemon  Daemon           Daemon
host1   OSD     Grafana          CephFS
host2   OSD     Monitor/Manager  RGW
host3   OSD     Monitor/Manager  RGW
host4   OSD     Monitor/Manager  CephFS
Figure 2. Colocated Daemons Example 2

Example 3

  1. Media: Full flash systems (SSDs) or spinning devices (HDDs)
  2. Use case: Block (RBD), Object (Ceph Object Gateway), and NFS for Ceph Object Gateway
  3. Number of nodes: 4
  4. Replication scheme: 3
Table 3. Colocated Daemons Example 3
Host    Daemon  Daemon           Daemon
host1   OSD     Grafana
host2   OSD     Monitor/Manager  RGW
host3   OSD     Monitor/Manager  RGW
host4   OSD     Monitor/Manager  NFS (RGW)
Figure 3. Colocated Daemons Example 3
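As a sketch, the NFS service in Example 3 could be declared with a Cephadm NFS specification; the service ID and host name below are illustrative:

```yaml
# Hypothetical spec fragment: NFS Ganesha daemon colocated on host4.
service_type: nfs
service_id: rgw-nfs          # illustrative service ID
placement:
  hosts:
    - host4
```

An NFS export backed by the Ceph Object Gateway is then created separately, for example with the `ceph nfs export create rgw` command (argument details depend on the Ceph release).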

The diagrams below show the differences between storage clusters with colocated and non-colocated daemons.

Figure 4. Colocated Daemons
Figure 5. Non-colocated Daemons