Colocation
Use this information to understand how colocation works and its advantages.
You can colocate containerized Ceph daemons on the same host. Colocating daemons provides the following advantages:

- Significant improvement in total cost of ownership (TCO) at small scale
- Reduction from six hosts to three for the minimum configuration
- Easier upgrades
- Better resource isolation
How colocation works
With the help of the Cephadm orchestrator, you can colocate one daemon from the following list with one or more OSD daemons (ceph-osd):

- Ceph Monitor (ceph-mon) and Ceph Manager (ceph-mgr) daemons
- NFS Ganesha (nfs-ganesha) for Ceph Object Gateway (radosgw)
- RBD Mirror (rbd-mirror)
- Observability stack (Grafana)

Additionally, for Ceph Object Gateway (radosgw) and Ceph File System (ceph-mds), you can colocate either of them with an OSD daemon plus a daemon from the previous list, excluding RBD Mirror.
Because ceph-mon and ceph-mgr work closely together, they do not count as two separate daemons for the purposes of colocation.

With the colocation rules shared above, the following are the minimum cluster sizes that comply with these rules:
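As an illustration, colocation with Cephadm is expressed by giving services overlapping placements. The following is a minimal sketch of a service specification, not a complete cluster definition; the hostnames (host1 through host3) and the OSD device selection are placeholder assumptions you would adapt to your own inventory:

```yaml
# Sketch: colocate Monitor, Manager, and OSD daemons on the same three hosts
# by listing the same hosts in each service's placement.
service_type: mon
placement:
  hosts:
    - host1
    - host2
    - host3
---
service_type: mgr
placement:
  hosts:
    - host1
    - host2
    - host3
---
service_type: osd
service_id: default_drive_group
placement:
  hosts:
    - host1
    - host2
    - host3
spec:
  data_devices:
    all: true    # placeholder: claims all available devices on each host
```

A specification file like this is applied with `ceph orch apply -i <file>`.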
Example 1
- Media: Full flash systems (SSDs)
- Use case: Block (RADOS Block Device) and File (CephFS), or Object (Ceph Object Gateway)
- Number of nodes: 3
- Replication scheme: 2
| Host | Daemon | Daemon | Daemon |
|---|---|---|---|
| host1 | OSD | Monitor/Manager | Grafana |
| host2 | OSD | Monitor/Manager | RGW or CephFS |
| host3 | OSD | Monitor/Manager | RGW or CephFS |
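If the layout in Example 1 is deployed for object storage, the non-OSD placements from the table can be sketched as service specifications like the following. The service ID and hostnames are placeholder assumptions, and realm or zone settings for the gateway are omitted for brevity:

```yaml
# Sketch: Example 1 placement for RGW (host2, host3) and Grafana (host1).
service_type: rgw
service_id: default    # placeholder service ID
placement:
  hosts:
    - host2
    - host3
---
service_type: grafana
placement:
  hosts:
    - host1
```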

Example 2
- Media: Full flash systems (SSDs) or spinning devices (HDDs)
- Use case: Block (RBD), File (CephFS), and Object (Ceph Object Gateway)
- Number of nodes: 4
- Replication scheme: 3
| Host | Daemon | Daemon | Daemon |
|---|---|---|---|
| host1 | OSD | Grafana | CephFS |
| host2 | OSD | Monitor/Manager | RGW |
| host3 | OSD | Monitor/Manager | RGW |
| host4 | OSD | Monitor/Manager | CephFS |
Example 3
- Media: Full flash systems (SSDs) or spinning devices (HDDs)
- Use case: Block (RBD), Object (Ceph Object Gateway), and NFS for Ceph Object Gateway
- Number of nodes: 4
- Replication scheme: 3
| Host | Daemon | Daemon | Daemon |
|---|---|---|---|
| host1 | OSD | Grafana | |
| host2 | OSD | Monitor/Manager | RGW |
| host3 | OSD | Monitor/Manager | RGW |
| host4 | OSD | Monitor/Manager | NFS (RGW) |
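The NFS Ganesha placement on host4 in Example 3 might be expressed with a specification like the sketch below; the service ID and port are placeholder assumptions:

```yaml
# Sketch: Example 3 placement for NFS Ganesha on host4.
service_type: nfs
service_id: nfs-rgw    # placeholder service ID
placement:
  hosts:
    - host4
spec:
  port: 2049
```

Note that this deploys the NFS Ganesha daemon only; creating the actual export that backs onto the Ceph Object Gateway is a separate step (for example, with the `ceph nfs export create rgw` command in recent Ceph releases).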

The diagrams below show the differences between storage clusters with colocated and non-colocated daemons.