Create multiple Ceph File Systems (CephFS) on a Ceph Monitor node.
Before you begin
Before you begin, make sure that you have the following prerequisites in place:
- A running and healthy IBM Storage Ceph cluster.
- Installation and configuration of the Ceph Metadata Server daemon (ceph-mds).
- Root-level access to a Ceph Monitor node.
- Root-level access to a Ceph client node.
About this task
Use this procedure to create additional Ceph File Systems on a Ceph Monitor node and to verify access to them from a Ceph client node.
Procedure
- Configure the client node to use the Ceph storage cluster.
- Enable the IBM Storage Ceph 7 tools repository.
[root@client ~]# subscription-manager repos --enable=ibmceph-7-tools-for-rhel-9-x86_64-rpms
- Install the ceph-fuse package.
[root@client ~]# dnf install ceph-fuse
- Copy the Ceph client keyring from the Ceph Monitor node to the client node.
scp root@MONITOR_NODE_NAME:/etc/ceph/KEYRING_FILE /etc/ceph/
Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address, and KEYRING_FILE with the name of the client keyring file.
For example,
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.client.1.keyring /etc/ceph/
- Copy the Ceph configuration file from a Ceph Monitor node to the client node.
scp root@MONITOR_NODE_NAME:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address.
For example,
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
- Set the appropriate permissions for the configuration file.
[root@client ~]# chmod 644 /etc/ceph/ceph.conf
- Create a Ceph File System.
ceph fs volume create FILE_SYSTEM_NAME
For example,
[root@mon ~]# ceph fs volume create cephfs01
Repeat this step to create more file systems; see the example after the note.
Note: By running this command, Ceph automatically creates the new pools and deploys a new Ceph Metadata Server (MDS) daemon to support the new file system. This command also configures the MDS affinity.
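For example, a second file system might be created and both file systems listed with the following commands. The name cephfs02 is only an illustrative choice; any unused file system name works.
[root@mon ~]# ceph fs volume create cephfs02
[root@mon ~]# ceph fs ls
The ceph fs ls command prints each file system together with its metadata pool and data pools, so both file systems should appear in its output.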
- Verify access to the new Ceph File System from a Ceph client.
- Authorize a Ceph client to access the new file system.
ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME DIRECTORY PERMISSIONS
Important: The Ceph client can only see the CephFS it is authorized for.
For example,
[root@mon ~]# ceph fs authorize cephfs01 client.1 / rw
[client.1]
key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK==
[root@mon ~]# ceph auth get client.1
exported keyring for client.1
[client.1]
key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK==
caps mds = "allow rw fsname=cephfs01"
caps mon = "allow r fsname=cephfs01"
caps osd = "allow rw tag cephfs data=cephfs01"
Note: Optionally, you can add a safety measure by specifying the root_squash option. This prevents accidental deletion scenarios by disallowing clients with a uid=0 or gid=0 to do write operations, but still allows read operations.
In the following example, root_squash is enabled for the file system cephfs01, except within the /volumes directory tree.
[root@mon ~]# ceph fs authorize cephfs01 client.1 / rw root_squash /volumes rw
[client.1]
key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK==
[root@mon ~]# ceph auth get client.1
[client.1]
key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK==
caps mds = "allow rw fsname=cephfs01 root_squash, allow rw fsname=cephfs01 path=/volumes"
caps mon = "allow r fsname=cephfs01"
caps osd = "allow rw tag cephfs data=cephfs01"
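When more than one file system exists, the client must be authorized separately for each file system it needs to access. The following sketch assumes that a second file system named cephfs02 was created earlier; it repeats the same authorization command for that file system name.
[root@mon ~]# ceph fs authorize cephfs02 client.1 / rw
The client can then mount cephfs02 by name, as shown later in this procedure.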
- Copy the Ceph user’s keyring to the Ceph client node.
ceph auth get CLIENT_NAME > OUTPUT_FILE_NAME
scp OUTPUT_FILE_NAME TARGET_NODE_NAME:/etc/ceph
For example,
[root@mon ~]# ceph auth get client.1 > ceph.client.1.keyring
exported keyring for client.1
[root@mon ~]# scp ceph.client.1.keyring client:/etc/ceph
root@client's password:
ceph.client.1.keyring 100% 178 333.0KB/s 00:00
- On the Ceph client node, create a new directory.
mkdir PATH_TO_NEW_DIRECTORY_NAME
For example,
[root@client ~]# mkdir /mnt/mycephfs
- On the Ceph client node, mount the new Ceph File System.
ceph-fuse PATH_TO_NEW_DIRECTORY_NAME -n CEPH_USER_NAME --client-fs=FILE_SYSTEM_NAME
For example,
[root@client ~]# ceph-fuse /mnt/mycephfs/ -n client.1 --client-fs=cephfs01
ceph-fuse[555001]: starting ceph client
2022-05-09T07:33:27.158+0000 7f11feb81200 -1 init, newargv = 0x55fc4269d5d0 newargc=15
ceph-fuse[555001]: starting fuse
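To mount an additional file system on the same client, use a separate mount point and repeat the ceph-fuse command with the other file system name. The following sketch assumes that the client was also authorized for a hypothetical second file system named cephfs02, as described earlier.
[root@client ~]# mkdir /mnt/mycephfs02
[root@client ~]# ceph-fuse /mnt/mycephfs02/ -n client.1 --client-fs=cephfs02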
- On the Ceph client node, either list the directory contents of the new mount point or create a file on the new mount point.
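For example, the following commands list the mount point and create a test file; the file name testfile.txt is arbitrary.
[root@client ~]# ls /mnt/mycephfs
[root@client ~]# touch /mnt/mycephfs/testfile.txt
If the root_squash option was enabled during authorization, write operations as root succeed only within the /volumes directory tree.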