Mounting the Ceph File System as a kernel client

The Ceph File System (CephFS) can be mounted as a kernel client, either manually or automatically on system start.

Before you begin

Before you begin, make sure that you have the following prerequisites in place:
  • Root-level access to a Linux-based client node.
  • Root-level access to a Ceph Monitor node.
  • An existing Ceph File System.

About this task

Important: Clients running Linux distributions other than Red Hat Enterprise Linux are permitted but not supported. If issues are found in the CephFS Metadata Server or other parts of the storage cluster when using these clients, IBM will address them. If the cause is found to be on the client side, the kernel vendor of the Linux distribution must address the issue.

Procedure

Configure the client node to use the Ceph storage cluster.
  1. Enable the IBM Storage Ceph 7 Tools repository.

    RHEL 9:

    [root@client01 ~]# curl https://public.dhe.ibm.com/ibmdl/export/pub/storage/ceph/ibm-storage-ceph-7-rhel-9.repo | sudo tee /etc/yum.repos.d/ibm-storage-ceph-7-rhel-9.repo

    Repeat this step on all nodes of the IBM Storage Ceph storage cluster.
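    Optionally, verify that the repository is enabled by listing the configured repositories. This is a quick sanity check, not part of the official procedure; the repository ID is assumed to contain "ceph", based on the .repo file name.
    [root@client01 ~]# dnf repolist | grep -i ceph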

  2. Install the ceph-common package.
    [root@client01 ~]# dnf install ceph-common
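    Optionally, verify the installation by checking the installed Ceph version; the ceph command-line tool is provided by the ceph-common package.
    [root@client01 ~]# ceph --version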
  3. Log in to the Cephadm shell on the monitor node.
    [root@host01 ~]# cephadm shell
  4. Copy the Ceph client keyring from the Ceph Monitor node to the client node.
    scp /ceph.client.CLIENT_ID.keyring root@CLIENT_NODE_NAME:/etc/ceph/ceph.client.CLIENT_ID.keyring
    Replace CLIENT_ID with the Ceph client ID, and CLIENT_NODE_NAME with the Ceph client host name or IP address.
    For example,
    [ceph: root@host01 ~]# scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring
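    Optionally, confirm that the keyring file is now present on the client node. This is a sanity check, not part of the official procedure.
    [root@client01 ~]# ls -l /etc/ceph/ceph.client.1.keyring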
  5. Copy the Ceph configuration file from a Ceph Monitor node to the client node.
    scp /etc/ceph/ceph.conf root@CLIENT_NODE_NAME:/etc/ceph/ceph.conf
    Replace CLIENT_NODE_NAME with the Ceph client host name or IP address.
    For example,
    [ceph: root@host01 ~]# scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf
  6. Set the appropriate read permissions for the Ceph configuration file.
    [root@client01 ~]# chmod 644 /etc/ceph/ceph.conf
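    Optionally, verify that the client node can reach the storage cluster before mounting. This sketch assumes that the client.1 key grants at least read access to the Ceph Monitors.
    [root@client01 ~]# ceph -s --id 1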
  7. Mount the Ceph File System either automatically or manually, as described in the following sections.

Mounting automatically

Procedure

  1. On the client host, create a new directory for mounting the Ceph File System.
    mkdir -p MOUNT_POINT
    For example,
    [root@client01 ~]# mkdir -p /mnt/cephfs
  2. Edit the /etc/fstab file.
    #DEVICE                                     PATH          TYPE   OPTIONS                 DUMP   FSCK
    MON_0_HOST:PORT,                            MOUNT_POINT   ceph   name=CLIENT_ID,         0      0
    MON_1_HOST:PORT,                                                 ceph.client_mountpoint=/VOL/SUB_VOL_GROUP/SUB_VOL/UID_SUB_VOL,
    MON_2_HOST:PORT:/VOL/SUB_VOL/UID_SUB_VOL                         fs=FILE_SYSTEM_NAME,
                                                                     [ADDITIONAL_OPTIONS]
    In the /etc/fstab file,
    • The first column (#DEVICE) sets the Ceph Monitor host names and the port number.
    • The second column (PATH) sets the mount point.
    • The third column (TYPE) sets the file system type. In this case, ceph, for CephFS.
    • The fourth column (OPTIONS) sets the various options. These include the user name and the secret file, set with the name and secretfile options. You can also mount specific volumes, subvolume groups, and subvolumes with the ceph.client_mountpoint option.

      Set the _netdev option to ensure that the file system is mounted only after the networking subsystem starts, which prevents hanging and networking issues. If you do not need access time information, setting the noatime option can increase performance.

    • Set the fifth and sixth columns (DUMP and FSCK) to 0.
    For example,
    #DEVICE        PATH          TYPE   OPTIONS                 DUMP   FSCK
    mon1:6789,     /mnt/cephfs   ceph   name=1,                 0      0
    mon2:6789,                          ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0,
    mon3:6789:/                         fs=cephfs01,
                                        _netdev,noatime
    The Ceph File System is mounted on the next system start.
    Note:
    • mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. Specify the client ID with name=CLIENT_ID, and mount.ceph will find the correct keyring file.
    • You can replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to.
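    Combining these notes, a minimal /etc/fstab entry can use the :/ shorthand as the device and omit the secret file. This is a sketch that assumes the client ID, file system name, and mount point from the earlier example:
    #DEVICE   PATH           TYPE   OPTIONS                               DUMP  FSCK
    :/        /mnt/cephfs    ceph   name=1,fs=cephfs01,_netdev,noatime    0     0
    You can test the entry without rebooting by running mount -a and then checking the mount point.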

Mounting manually

Procedure

  1. Create a mount directory on the client node.
    mkdir -p MOUNT_POINT
    For example,
    [root@client01 ~]# mkdir -p /mnt/cephfs
  2. Mount the Ceph File System.
    In the mount command, separate multiple Ceph Monitor addresses with commas, specify the mount point, and set the client name.
    Note: mount.ceph can read keyring files directly; therefore, a secret file is not necessary. It is enough to specify the client ID with name=CLIENT_ID, and mount.ceph finds the correct keyring file.
    mount -t ceph MONITOR-1_NAME:6789,MONITOR-2_NAME:6789,MONITOR-3_NAME:6789:/ MOUNT_POINT -o name=CLIENT_ID,fs=FILE_SYSTEM_NAME
    For example,
    [root@client01 ~]# mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01
    Note:
    • Configure a DNS server so that a single host name resolves to multiple IP addresses. That single host name can then be used with the mount command, instead of supplying a comma-separated list.
    • You can also replace the Monitor host names with the string :/ and mount.ceph reads the Ceph configuration file to determine which Monitors to connect to; see the example after these notes.
    • Set the nowsync option to asynchronously run file creation and removal on the IBM Storage Ceph clusters. This improves the performance of some workloads by avoiding round-trip latency for these system calls without impacting consistency. The nowsync option requires kernel clients with Red Hat Enterprise Linux 9.2 or later.
      For example:
      [root@client01 ~]# mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o nowsync,name=1,fs=cephfs01
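    For example, a sketch of the :/ shorthand mentioned above, which lets mount.ceph determine the Monitors from the Ceph configuration file:
    [root@client01 ~]# mount -t ceph :/ /mnt/cephfs -o name=1,fs=cephfs01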
  3. Verify that the file system is successfully mounted.
    stat -f MOUNT_POINT
    For example,
    [root@client01 ~]# stat -f /mnt/cephfs
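    Optionally, you can also inspect the mount with findmnt, a general Linux utility, to confirm the file system type and the options in effect.
    [root@client01 ~]# findmnt /mnt/cephfs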