Introduction to IBM Storage Ceph

IBM Storage Ceph is a highly scalable and reliable object storage solution. It is typically deployed in conjunction with cloud computing solutions like OpenStack, as a standalone storage service, or as network-attached storage using file interfaces.

All IBM Storage Ceph deployments consist of a storage cluster, commonly referred to as the Ceph Storage Cluster or RADOS (Reliable Autonomic Distributed Object Store), which consists of three types of daemons:

  • Ceph Monitors (ceph-mon): Ceph Monitors provide a few critical functions: they establish consensus about the state of the cluster, maintain a history of that state (for example, whether an OSD is up, running, and in the cluster), provide the list of pools through which clients write and read data, and provide authentication for clients and the Ceph Storage Cluster daemons. (A minimal status query illustrating this monitor-maintained state is sketched after this list.)

  • Ceph Managers (ceph-mgr): Ceph Manager daemons track the status of peering between copies of placement groups distributed across Ceph OSDs, keep a history of placement group states, and collect metrics about the Ceph cluster. They also provide interfaces for external monitoring and management systems.

  • Ceph OSDs (ceph-osd): Ceph Object Storage Daemons (OSDs) store and serve client data, replicate client data to secondary Ceph OSDs, track and report their own health and the health of neighboring OSDs to the Ceph Monitors, dynamically recover from failures, and backfill data when the cluster size changes, among other functions.
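
To make the daemon roles concrete, here is a minimal sketch that uses the rados Python binding (shipped with Ceph as python3-rados) to ask the monitors for the cluster status they maintain. The configuration file path is a common default, and the JSON keys reflect typical ceph status output; treat both as assumptions that may vary by release.

    import json

    import rados  # Python binding for librados (python3-rados)

    # Paths and client identity are assumptions; adjust for your deployment.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Ask the monitors for the cluster state they maintain by consensus.
    cmd = json.dumps({'prefix': 'status', 'format': 'json'})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    status = json.loads(outbuf)
    print(status.get('health', {}).get('status'))  # for example, HEALTH_OK
    print(status.get('osdmap'))                    # OSDs up/in as seen by the monitors

    cluster.shutdown()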

All IBM Storage Ceph deployments store end-user data in the Ceph Storage Cluster or RADOS (Reliable Autonomic Distributed Object Store). Generally, users DO NOT interact with the Ceph Storage Cluster directly; rather, they interact with a Ceph client.

There are three primary Ceph Storage Cluster clients:

  • Ceph Object Gateway (radosgw): The Ceph Object Gateway, also known as the RADOS Gateway, radosgw, or rgw, provides an object storage service with RESTful S3- and Swift-compatible APIs. The Ceph Object Gateway stores data on behalf of its clients in the Ceph Storage Cluster or RADOS. (See the S3 sketch after this list.)

  • Ceph Block Device (rbd): The Ceph Block Device provides copy-on-write, thin-provisioned, and cloneable virtual block devices, either to the Linux kernel via the kernel RBD module (krbd) or to cloud computing solutions like OpenStack via librbd. (See the librbd sketch after this list.)

  • Ceph File System (cephfs): The Ceph File System consists of one or more Metadata Servers (mds), which store the inode portion of a file system as objects in the Ceph Storage Cluster. Ceph file systems can be mounted via a kernel client or a FUSE client, or accessed via the libcephfs library by cloud computing solutions like OpenStack. (See the libcephfs sketch after this list.)
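
Because the Ceph Object Gateway exposes an S3-compatible REST API, any standard S3 client can talk to it. The following minimal sketch uses boto3; the endpoint URL, bucket name, and credentials are placeholders for values an administrator would provision (for example, with the radosgw-admin tool).

    import boto3

    # The endpoint and credentials are placeholders; radosgw users and their
    # keys are created by a cluster administrator.
    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:8080',
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
    )

    s3.create_bucket(Bucket='demo-bucket')
    s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'stored via radosgw')
    print(s3.get_object(Bucket='demo-bucket', Key='hello.txt')['Body'].read())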
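
For the Ceph Block Device, a minimal sketch along the librbd path uses the rbd Python binding (python3-rbd). The pool name 'rbd' and the image name are assumptions; the pool must already exist and be initialized for RBD use.

    import rados
    import rbd  # Python binding for librbd (python3-rbd)

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # Create a thin-provisioned 1 GiB image; space is consumed only on write.
    rbd.RBD().create(ioctx, 'demo-image', 1024 ** 3)

    with rbd.Image(ioctx, 'demo-image') as image:
        image.write(b'first bytes of the virtual block device', 0)

    ioctx.close()
    cluster.shutdown()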
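
For the Ceph File System, the libcephfs route can be exercised from Python via the cephfs binding (python3-cephfs). This is a minimal sketch assuming a default configuration file and client credentials that permit CephFS access; the file path is illustrative.

    import os

    import cephfs  # Python binding for libcephfs (python3-cephfs)

    # The MDS serves the path metadata; file data is written to RADOS.
    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()

    fd = fs.open(b'/greeting.txt', os.O_CREAT | os.O_WRONLY, 0o644)
    fs.write(fd, b'hello from libcephfs', 0)
    fs.close(fd)

    fs.unmount()
    fs.shutdown()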

Additional clients include librados, which enables developers to create custom applications that interact directly with the Ceph Storage Cluster, and command-line interface (CLI) clients for administrative purposes.
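
As a minimal sketch of how a custom application might use librados through its Python binding (python3-rados), assuming a pool named 'demo-pool' already exists:

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('demo-pool')

    ioctx.write_full('greeting', b'hello from librados')  # store an object
    print(ioctx.read('greeting'))                         # read it back

    ioctx.close()
    cluster.shutdown()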