Cluster creation
The rook-operator completes the cluster component configuration and setup.
After the ocs-operator creates the CephCluster CR, the rook-operator creates the Ceph cluster according to the desired configuration.
The rook-operator configures the various components, as detailed in Table 1.
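For illustration, the following is a minimal sketch of the kind of CephCluster CR the ocs-operator might create. The names, namespace, backing storage class, and sizes are assumptions for the example, not values taken from a live deployment.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: ocs-storagecluster-cephcluster    # assumed name
  namespace: openshift-storage            # assumed namespace
spec:
  mon:
    count: 3                   # three mons on different nodes; must keep majority quorum
    allowMultiplePerNode: false
  mgr:
    count: 1                   # mgr gathers cluster metrics for Prometheus
  storage:
    storageClassDeviceSets:
      - name: ocs-deviceset    # assumed device set name
        count: 3               # one OSD per PV in this set
        portable: true
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              storageClassName: gp3   # assumed backing storage class
              accessModes:
                - ReadWriteOnce
              volumeMode: Block
              resources:
                requests:
                  storage: 512Gi      # assumed OSD size
```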
| Component | Description |
|---|---|
| Ceph mon daemons | Three Ceph mon daemons are started on different nodes in the cluster. They manage the core metadata for the Ceph cluster and must form a majority quorum. The metadata for each mon is backed either by a PV in a cloud environment, or by a path on the local host in a local storage device environment. |
| Ceph mgr daemon | This daemon is started to gather metrics for the cluster and report them to Prometheus. |
| Ceph OSDs | These OSDs are created according to the configuration of the storageClassDeviceSets. Each OSD consumes a PV that stores the user data. By default, Ceph maintains three replicas of the application data across different OSDs for high durability and availability, using the CRUSH algorithm. |
| CSI provisioners | These provisioners are started for RBD and CephFS. When volumes are requested for the storage classes of Fusion Data Foundation, the requests are directed to the Ceph-CSI driver to provision the volumes in Ceph (see the volume claim sketch after this table). |
| CSI volume plugins | The CSI volume plugins for RBD and CephFS are started on each node in the cluster. The volume plugins must be running wherever applications need to mount Ceph volumes. |
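As an example of the provisioning path, a PVC that references one of the Data Foundation storage classes is handed to the corresponding Ceph-CSI provisioner. The claim below is hypothetical; the storage class name ocs-storagecluster-ceph-rbd is the one commonly created for RBD-backed RWO volumes and may differ in your installation.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data               # hypothetical claim name
spec:
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed RBD storage class name
  accessModes:
    - ReadWriteOnce            # RWO volumes are served from RBD pools
  resources:
    requests:
      storage: 10Gi
```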
After the CephCluster
CR is configured, Rook reconciles the remaining Ceph CRs
to complete the setup, as detailed in Table 2.
| Ceph CRs | Description |
|---|---|
| CephBlockPool | The CephBlockPool CR provides the configuration for the Rook operator to create Ceph pools for RWO volumes (see the combined sketch after this table). |
| CephFilesystem | The CephFilesystem CR instructs the Rook operator to configure a shared file system with CephFS, typically for RWX volumes. The CephFS metadata server (MDS) is started to manage the shared volumes. |
| CephObjectStore | The CephObjectStore CR instructs the Rook operator to configure an object store with the RGW service. |
| CephObjectStoreUser | The CephObjectStoreUser CR instructs the Rook operator to configure an object store user for NooBaa to consume, publishing the access/private key as well as the CephObjectStore endpoint. |
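The multi-document sketch below shows roughly what these CRs look like; the names, namespace, replica counts, and pool settings are assumptions for illustration rather than the exact manifests the ocs-operator generates.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ocs-storagecluster-cephblockpool    # assumed name
  namespace: openshift-storage
spec:
  failureDomain: host
  replicated:
    size: 3                    # three replicas, matching the default data protection
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: ocs-storagecluster-cephfilesystem   # assumed name
  namespace: openshift-storage
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1             # MDS instances managing the shared (RWX) volumes
    activeStandby: true
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: ocs-storagecluster-cephobjectstore  # assumed name
  namespace: openshift-storage
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    instances: 1               # RGW service fronting the object store
    port: 80
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: noobaa-ceph-objectstore-user        # assumed name
  namespace: openshift-storage
spec:
  store: ocs-storagecluster-cephobjectstore
  displayName: NooBaa object store user
```

Rook then exposes the generated object store user credentials and endpoint through a Secret that NooBaa can read.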
The operator monitors the Ceph health to ensure that the storage platform remains healthy. If a mon daemon is down for too long a period (10 minutes), Rook starts a new mon in its place so that full quorum can be restored.
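The failover window is driven by Rook's daemon health checks on the CephCluster CR. The fragment below sketches those settings with their commonly documented defaults; treat the exact field values as assumptions for your Rook version.

```yaml
# Fragment of the CephCluster spec (values shown are typical defaults,
# not verified against a specific release).
spec:
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s      # how often Rook checks mon health
        timeout: 600s      # a mon down this long (10 minutes) triggers failover
```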
When the ocs-operator
updates the CephCluster
CR, Rook
immediately responds to the requested changes to update the cluster configuration.
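For example, if the ocs-operator scales up a device set, Rook picks up the changed spec and creates the additional OSDs. The fragment below is purely illustrative, reusing the assumed device set name from the earlier sketch.

```yaml
# Hypothetical update: the device set count is raised from 3 to 6,
# so Rook reconciles three additional OSDs.
spec:
  storage:
    storageClassDeviceSets:
      - name: ocs-deviceset    # assumed name
        count: 6
```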