File storage
IBM Storage Ceph provides file storage with the Ceph File System (CephFS), with NFS on CephFS, or with SMB on CephFS. CephFS provides shared file access to an IBM Storage Ceph cluster and uses POSIX semantics wherever possible. CephFS is built on top of the Ceph distributed object store, RADOS (Reliable Autonomic Distributed Object Store), and inherits its architectural benefits: high availability, built-in data redundancy, shared and parallel data access, and a highly scalable architecture.
NFS and SMB protocols are also supported to access CephFS file shares. SMB is currently in Technology Preview.
Common use cases
- File System as a Service (FSaaS).
- General purpose network file system.
- Home directories.
- Applications that require multiple readers, writers, or both, for example, high-performance computing (HPC).
- Media and content repositories.
Example use cases for File System as a Service (FSaaS)
The following examples show generic working configurations that you can adapt to your environment. For detailed information about the procedures, see the relevant sections of the documentation.
- Example workflow for using Ceph File System
- Example workflow for using CephFS snapshot
- Example workflow for using NFS for CephFS
- Example workflow for using SMB for CephFS (Technology Preview)
- Example workflow for mounting a Samba share manually using SMB through Microsoft Windows Explorer
- Example workflow for mounting a Samba share manually using SMB on the Microsoft Windows command line
- Example workflow for mounting a Samba share automatically on server start using SMB through Microsoft Windows Explorer
Example workflow for using Ceph File System
This information provides an example workflow for using CephFS.
- Create a Ceph File System volume or subvolume.
For detailed information, see Managing Ceph File System volumes, subvolume groups, and subvolumes.
[root@client01 ~]# curl https://public.dhe.ibm.com/ibmdl/export/pub/storage/ceph/ibm-storage-ceph-8-rhel-9.repo | sudo tee /etc/yum.repos.d/ibm-storage-ceph-8-rhel-9.repo
[root@client01 ~]# dnf install ceph-common
[root@host01 ~]# cephadm shell
[root@client ~]# subscription-manager repos --enable=ibmceph-8-tools-for-rhel-9-x86_64-rpms
[root@client ~]# dnf install ceph-fuse
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.client.1.keyring /etc/ceph/
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
[root@client ~]# chmod 644 /etc/ceph/ceph.conf
[root@mon ~]# ceph fs volume create cephfs01
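The commands above create a volume; if you need a subvolume instead, the following is a minimal sketch of one way to create it. The group name subgroup0 and subvolume name sub0 are illustrative assumptions, not names used elsewhere in this workflow.
# Create a subvolume group, a subvolume inside it, and print the subvolume's full path.
[root@mon ~]# ceph fs subvolumegroup create cephfs01 subgroup0
[root@mon ~]# ceph fs subvolume create cephfs01 sub0 --group_name subgroup0
[root@mon ~]# ceph fs subvolume getpath cephfs01 sub0 --group_name subgroup0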
- Set Ceph client credentials for CephFS.
For detailed information, see Creating client users for a Ceph File System.
Part of setting client credentials is to restrict what the client can do, in this example allowing it to write only in the temp directory of file system cephfs_a.
[ceph: root@host01 ~]# ceph fs authorize cephfs_a client.1 /temp rw
[ceph: root@host01 ~]# ceph auth get client.1
client.1
    key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
    caps mds = "allow r, allow rw path=/temp"
    caps mon = "allow r"
    caps osd = "allow rw tag cephfs data=cephfs_a"
[ceph: root@host01 ~]# ceph auth get client.1 -o ceph.client.1.keyring
exported keyring for client.1
[ceph: root@host01 ~]# scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring
[root@client01 ~]# chmod 644 /etc/ceph/ceph.client.1.keyring
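The ceph fs authorize command also accepts multiple path and permission pairs, which is useful when a client needs read access everywhere but write access only under one directory. A hedged sketch follows; client.2 is a hypothetical client name not used elsewhere in this workflow.
# Grant read access to the whole file system and read/write access only under /temp.
[ceph: root@host01 ~]# ceph fs authorize cephfs_a client.2 / r /temp rw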
- Mount a CephFS volume on a client. The volume can be mounted with either a kernel client or a FUSE client.
- This example shows the steps for mounting file storage as a kernel client with automatic mounting; a persistent-mount sketch follows the commands. For detailed information, see Deploying the CephFS.
[root@client01 ~]# ceph auth get-key client.1 > /etc/ceph/1.secret
[root@client01 ~]# mount -t ceph 10.0.195.41,10.0.195.177,10.0.195.78:/ /mnt/cephfs -o name=1,secretfile=/etc/ceph/1.secret,fs=cephfs-ec
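For the automatic part, the mount is typically made persistent with an /etc/fstab entry. The sketch below reuses the monitor addresses, client name, and file system name from the command above; the noatime and _netdev options are illustrative assumptions.
# /etc/fstab entry: restore the kernel CephFS mount automatically at boot
10.0.195.41,10.0.195.177,10.0.195.78:/  /mnt/cephfs  ceph  name=1,secretfile=/etc/ceph/1.secret,fs=cephfs-ec,noatime,_netdev  0 0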
- This example shows the steps for mounting as a FUSE client.
[root@client ~]# mkdir /mnt/mycephfs
[root@client ~]# ceph-fuse /mnt/mycephfs/ -n client.1 --client-fs=cephfs01
ceph-fuse[555001]: starting ceph client
2022-05-09T07:33:27.158+0000 7f11feb81200 -1 init, newargv = 0x55fc4269d5d0 newargc=15
ceph-fuse[555001]: starting fuse
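Whichever client you use, a quick sanity check confirms the mount is live. This is a hedged sketch; writes additionally require a path that the client's capabilities allow.
# Confirm the file system is mounted and its contents are visible.
[root@client ~]# df -h /mnt/mycephfs
[root@client ~]# ls -al /mnt/mycephfs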
Example workflow for using CephFS snapshot
This information provides an example workflow for using CephFS snapshots. For detailed information, see Creating a snapshot for a Ceph File System.
- Create a snapshot for a CephFS.
[ceph: root@host01 ~]# cd /mnt/cephfs
[ceph: root@host01 ~]# touch before_snapshot
[ceph: root@host01 cephfs ~]# mkdir .snap/newsnap
Note: The ceph fs subvolume snapshot CLI commands can be used to create subvolume snapshots. While mkdir and rmdir can still be used to create and delete snapshots, the CLI commands are the most current method and are recommended (see the sketch after this workflow).
- Verify and use the snapshot.
[ceph: root@host01 cephfs ~]# touch after_snapshot
[ceph: root@host01 cephfs ~]# ls -al .snap/newsnap
- Delete the snapshots from a CephFS.
Important: Attempting to delete root-level snapshots, which might contain underlying snapshots, will fail.
[ceph: root@host01 ~]# rmdir .snap/newsnap
[ceph: root@host01 ~]# ls -al .snap/
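As noted above, the recommended CLI method operates on subvolume snapshots. The following is a minimal sketch; the volume name cephfs, subvolume sub0, group subgroup0, and snapshot name snap0 are illustrative assumptions.
# Create, list, and remove a subvolume snapshot with the CLI.
[ceph: root@host01 /]# ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0
[ceph: root@host01 /]# ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0
[ceph: root@host01 /]# ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0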
Example workflow for using NFS for CephFS
This information provides an example workflow for using NFS on CephFS.
- Get the absolute subvolume path.
ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]
[ceph: root@host01 /]# ceph fs subvolume getpath cephfs sub0 --group_name subgroup0
/volumes/subgroup0/sub0/d9d731bc-8acb-415c-a2de-7dc4c5827a28
This volume will be shared with NFS. For detailed information, see Managing CephFS volumes.
Creating a Ceph File System volume creates the Ceph File System along with its data and metadata pools.
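The export step below assumes an NFS cluster named nfs-cephfs already exists. If it does not, the following sketch shows one way to create it; the placement host host01 is an illustrative assumption.
# Create an NFS (Ganesha) cluster with one daemon on host01, then confirm it exists.
[ceph: root@host01 /]# ceph nfs cluster create nfs-cephfs "host01"
[ceph: root@host01 /]# ceph nfs cluster ls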
- Create an NFS export by using the subvolume path.
Note: It is recommended to create the export on a subvolume instead of creating the NFS export directly on the CephFS file system.
For detailed information, see Exporting Ceph File System namespaces over the NFS protocol.
[ceph: root@host01 /]# ceph nfs export create cephfs nfs-cephfs /ceph cephfs --path=/volumes/subgroup0/sub0/d9d731bc-8acb-415c-a2de-7dc4c5827a28
{
  "bind": "/ceph",
  "cluster": "nfs-cephfs",
  "fs": "cephfs",
  "mode": "RW",
  "path": "/volumes/subgroup0/sub0/d9d731bc-8acb-415c-a2de-7dc4c5827a28"
}
[ceph: root@host01 /]# ceph nfs export get nfs-cephfs /ceph
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "nfs-cephfs",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.nfs-cephfs.cephfs.070ae927"
  },
  "path": "/volumes/subgroup0/sub0/d9d731bc-8acb-415c-a2de-7dc4c5827a28",
  "protocols": [3, 4],
  "pseudo": "/ceph",
  "security_label": true,
  "squash": "none",
  "transports": ["TCP"]
}
[ceph: root@host01 /]# ceph nfs export ls nfs-cephfs --detailed
[
  {
    "access_type": "RW",
    "clients": [],
    "cluster_id": "nfs-cephfs",
    "export_id": 1,
    "fsal": {
      "cmount_path": "/",
      "fs_name": "cephfs",
      "name": "CEPH",
      "user_id": "nfs.nfs-cephfs.cephfs.070ae927"
    },
    "path": "/volumes/subgroup0/sub0/d9d731bc-8acb-415c-a2de-7dc4c5827a28",
    "protocols": [3, 4],
    "pseudo": "/ceph",
    "security_label": true,
    "squash": "none",
    "transports": ["TCP"]
  }
]
[ceph: root@host01 /]# ceph nfs export info nfs-cephfs /ceph
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "nfs-cephfs",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.nfs-cephfs.cephfs.070ae927"
  },
  "path": "/volumes/subgroup0/sub0/d9d731bc-8acb-415c-a2de-7dc4c5827a28",
  "protocols": [3, 4],
  "pseudo": "/ceph",
  "security_label": true,
  "squash": "none",
  "transports": ["TCP"]
}
- Mount a CephFS volume or subvolume by using NFS. For detailed information, see Exporting Ceph File System namespaces over the NFS protocol.
[root@client01 ~]# mount -t nfs -o port=2049 host01:/ceph /mnt/nfs-cephfs
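A brief, hedged check that the NFS mount is usable; the test file name is arbitrary and assumes the export allows writes (access_type RW, as above).
# Verify the export is mounted and writable over NFS.
[root@client01 ~]# df -h /mnt/nfs-cephfs
[root@client01 ~]# touch /mnt/nfs-cephfs/nfs_test.txt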
Example workflow for using SMB for CephFS (Technology Preview)
This information provides an example workflow for using SMB on CephFS.
- Enable the SMB module for a CephFS.
ceph mgr module enable smb
- Create an SMB cluster.
Important: You need to specify the authentication mode: either user for standalone or active-directory for an AD domain member.
ceph smb cluster create <cluster_id> {user | active-directory} [--domain-realm=<domain_realm>] [--domain-join-user-pass=<domain_join_user_pass>] [--define-user-pass=<define_user_pass>] [--custom-dns=<custom_dns>] [--placement=<placement>] [--clustering=<clustering>]
Note: The smb module automatically deploys logical clusters on hosts using cephadm orchestration. This orchestration is triggered automatically when a cluster has been configured for at least one share. The placement field of the cluster resource is passed on to the orchestration layer and determines on which nodes of the Ceph cluster the Samba containers run.
- Apply the configuration. The following examples illustrate smb.yaml files that create SMB shares on top of CephFS subvolumes, based on the provided content.
ceph smb apply -i smb.yaml

Auth_mode: user

- resource_type: ceph.smb.cluster
  cluster_id: smb1
  auth_mode: user
  user_group_settings:
    - {source_type: resource, ref: ug1}
  placement:
    count: 1
- resource_type: ceph.smb.usersgroups
  users_groups_id: ug1
  values:
    users:
      - {name: user1, password: passwd}
      - {name: user2, password: passwd}
    groups: []
- resource_type: ceph.smb.share
  cluster_id: smb1
  share_id: share1
  cephfs:
    volume: cephfs
    subvolumegroup: smb
    subvolume: sv1
    path: /
- resource_type: ceph.smb.share
  cluster_id: smb1
  share_id: share2
  cephfs:
    volume: cephfs
    subvolumegroup: smb
    subvolume: sv2
    path: /

Auth_mode: AD

- resource_type: ceph.smb.cluster
  cluster_id: modtest1
  auth_mode: active-directory
  domain_settings:
    realm: samba.qe
    join_sources:
      - source_type: resource
        ref: join1-admin
  custom_dns:
    - 10.70.44.153
  placement:
    count: 1
- resource_type: ceph.smb.join.auth
  auth_id: join1-admin
  auth:
    username: Administrator
    password: Redhat@123
- resource_type: ceph.smb.share
  cluster_id: modtest1
  share_id: share1
  cephfs:
    volume: cephfs
    subvolumegroup: smb
    subvolume: sv1
    path: /
- resource_type: ceph.smb.share
  cluster_id: modtest1
  share_id: share2
  cephfs:
    volume: cephfs
    subvolumegroup: smb
    subvolume: sv2
    path: /
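A hedged way to confirm the resources were accepted and the Samba containers were scheduled; exact output varies by release.
# Show the SMB resources the module now manages.
[ceph: root@host01 /]# ceph smb show
# List orchestrator services and look for the smb service deployed by cephadm.
[ceph: root@host01 /]# ceph orch ls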
Example workflow for mounting a Samba share manually using SMB through Microsoft Windows Explorer
This information provides an example workflow for mounting a Samba share manually using SMB through Microsoft Windows Explorer.
- In Windows Explorer, open the Map Network Drive screen.
- Select the drive letter using the Drive list.
- In the Folder text box, specify the path of the server and the shared resource in the following format: \\SERVER_NAME\SHARE.
- Click Finish to complete the process, and display the network drive in Windows Explorer.
- Navigate to the network drive to verify it has mounted correctly.
Example workflow for mounting a Samba share manually using SMB on the Microsoft Windows command line
This information provides an example workflow for mounting a Samba share manually using SMB on the Microsoft Windows command line.
- Click Start, and then type cmd.
- Enter net use z: \\SERVER_NAME\SHARE, where z: is the drive letter to assign to the shared volume. For example:
net use z: \\server1\share1
- Navigate to the network drive to verify it has mounted correctly.
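When the share is no longer needed, the same net use command removes the mapping; a brief sketch:
REM Remove the mapped drive.
net use z: /delete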
Example workflow for mounting a Samba share automatically on server start using SMB through Microsoft Windows Explorer
This information provides an example workflow for mounting a Samba share automatically on server start using SMB through Microsoft Windows Explorer.
- In Windows Explorer, open Map Network Drive.
- Select the drive letter using the Drive drop-down list.
- In the Folder field, specify the path of the server and the shared resource in the following format: \\SERVER_NAME\SHARE.
- Select Reconnect at logon.
- Complete the process by clicking Finish. The network drive displays in Windows Explorer.
- If the Windows Security dialog displays, enter the username and password and click OK.
- Navigate to the network drive to verify it has mounted correctly.
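The command-line equivalent of Reconnect at logon is the /persistent flag on net use; a brief sketch using the earlier example names:
REM Map the drive and restore it automatically at each logon.
net use z: \\server1\share1 /persistent:yes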