Deploying the MDS service using the command line interface
Using the Ceph Orchestrator, you can deploy the Metadata Server (MDS)
service using the placement specification in the command line
interface. Ceph File System (CephFS) requires one or more MDS daemons.
NOTE: Ensure you have at least two pools, one for Ceph file system (CephFS) data and one for CephFS metadata.
Prerequisites
A running IBM Storage Ceph cluster.
Hosts are added to the cluster.
All manager, monitor, and OSD daemons are deployed.
Root-level access to all the nodes.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
There are two ways of deploying MDS daemons using the placement specification:
Method 1
Use the ceph fs volume command to create the MDS daemons. This creates the CephFS volume and the pools associated with the CephFS, and also starts the MDS service on the hosts.
Syntax
ceph fs volume create FILESYSTEM_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"
NOTE: By default, replicated pools are created for this command.
Example
[ceph: root@host01 /]# ceph fs volume create test --placement="2 host01 host02"
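Because ceph fs volume create also creates the data and metadata pools for you, one way to confirm what was created is to list the volumes and the pools afterwards. A minimal check, assuming the volume name test from the example above:
[ceph: root@host01 /]# ceph fs volume ls
[ceph: root@host01 /]# ceph osd pool ls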
Method 2
Create the pools, create the CephFS, and then deploy the MDS service using the placement specification:
Create the pools for CephFS:
Syntax
ceph osd pool create DATA_POOL [PG_NUM]
ceph osd pool create METADATA_POOL [PG_NUM]
Example
[ceph: root@host01 /]# ceph osd pool create cephfs_data 64
[ceph: root@host01 /]# ceph osd pool create cephfs_metadata 64
Typically, the metadata pool can start with a conservative number of Placement Groups (PGs) because it generally has far fewer objects than the data pool. You can increase the number of PGs later if needed. Metadata pool sizes typically range from 64 PGs to 512 PGs. Size the data pool in proportion to the number and sizes of the files you expect in the file system.
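If the initial PG count turns out to be too small, you can raise it on an existing pool. A minimal sketch, assuming the pool name cephfs_data from the example above; the value 128 is only illustrative, so choose a count that fits your cluster:
[ceph: root@host01 /]# ceph osd pool set cephfs_data pg_num 128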
IMPORTANT: For the metadata pool, consider using:
A higher replication level, because any data loss in this pool can make the whole file system inaccessible.
Storage with lower latency, such as Solid-State Drive (SSD) disks, because this directly affects the observed latency of file system operations on clients.
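For example, you could raise the replication size of the metadata pool and, if the cluster has SSD-backed OSDs, pin the pool to an SSD CRUSH rule. A minimal sketch, assuming the pool name cephfs_metadata from the example above; the rule name ssd-rule is only illustrative:
[ceph: root@host01 /]# ceph osd pool set cephfs_metadata size 3
[ceph: root@host01 /]# ceph osd crush rule create-replicated ssd-rule default host ssd
[ceph: root@host01 /]# ceph osd pool set cephfs_metadata crush_rule ssd-rule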
Create the file system for the data pools and metadata pools:
Syntax
ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL
Example
[ceph: root@host01 /]# ceph fs new test cephfs_metadata cephfs_data
Deploy the MDS service using the ceph orch apply command:
Syntax
ceph orch apply mds FILESYSTEM_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"
Example
[ceph: root@host01 /]# ceph orch apply mds test --placement="2 host01 host02"
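As an alternative to passing --placement on the command line, you can describe the same MDS service in a service specification file and apply it with ceph orch apply -i. A minimal sketch, assuming the file system name test and the hosts host01 and host02 from the example above; the file name mds.yaml is only illustrative:
[ceph: root@host01 /]# cat <<EOF > mds.yaml
service_type: mds
service_id: test
placement:
  hosts:
    - host01
    - host02
EOF
[ceph: root@host01 /]# ceph orch apply -i mds.yaml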
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
Check the CephFS status:
Example
[ceph: root@host01 /]# ceph fs ls
[ceph: root@host01 /]# ceph fs status
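The ceph fs status command also accepts a file system name if you want to limit the output to a single file system; for example, assuming the file system name test from the examples above:
[ceph: root@host01 /]# ceph fs status test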
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=mds
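To narrow the listing to the MDS daemons of a specific file system, ceph orch ps also accepts a service name filter. A minimal sketch, assuming the service name mds.test that cephadm derives from the file system name test used in the examples above:
[ceph: root@host01 /]# ceph orch ps --service_name=mds.test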