Core Ceph components

An IBM Storage Ceph cluster can contain a large number of Ceph nodes for limitless scalability, high availability, and performance. Use this information to learn how CRUSH enables Ceph to perform seamless operations.

Each node leverages non-proprietary hardware and intelligent Ceph daemons that communicate with each other to:

  • Write and read data

  • Compress data

  • Ensure durability by replicating or erasure coding data (see the overhead sketch after this list)

  • Monitor and report on cluster health, also called 'heartbeating'

  • Redistribute data dynamically, also called 'backfilling'

  • Ensure data integrity

  • Recover from failures
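
To make the durability trade-off concrete, the following sketch (plain Python, not Ceph code; the pool parameters are hypothetical examples) compares the raw-storage overhead of 3x replication with a k+m erasure-coded layout:

    # Illustrative sketch, not Ceph code: compares the raw-storage overhead of
    # 3x replication with a k+m erasure-coded layout. The parameters below are
    # hypothetical examples.

    def replication_overhead(replicas: int) -> float:
        # Raw bytes consumed per byte of user data with N full copies.
        return float(replicas)

    def erasure_coding_overhead(k: int, m: int) -> float:
        # Raw bytes consumed per byte of user data with k data chunks and
        # m coding chunks; the object survives the loss of up to m chunks.
        return (k + m) / k

    print(replication_overhead(3))        # 3.0 -> 1 TB of data uses 3 TB raw
    print(erasure_coding_overhead(4, 2))  # 1.5 -> 1 TB of data uses 1.5 TB raw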

To the Ceph client interface that reads and writes data, an IBM Storage Ceph cluster looks like a simple pool where it stores data. However, librados and the storage cluster perform many complex operations in a manner that is completely transparent to the client interface. Both Ceph clients and Ceph OSDs use the CRUSH (Controlled Replication Under Scalable Hashing) algorithm to compute where data belongs, so neither needs to consult a central lookup table.
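
The sketch below is not the real CRUSH algorithm (CRUSH walks a hierarchical, weighted cluster map), but it illustrates the property described above: given the same cluster map, any client or OSD can compute the same object-to-OSD mapping locally, without a central metadata server. All names and counts here are hypothetical:

    # Simplified placement sketch. This is NOT the real CRUSH algorithm
    # (CRUSH walks a hierarchical, weighted cluster map); it only shows the
    # key property: any client or OSD holding the same cluster map computes
    # the same object -> OSD mapping locally, with no central lookup table.
    # All names and counts below are hypothetical.
    import hashlib

    PG_COUNT = 128                           # hypothetical placement-group count
    OSDS = [f"osd.{i}" for i in range(6)]    # hypothetical OSD identifiers
    REPLICAS = 3

    def object_to_pg(object_name: str) -> int:
        # Hash the object name into a placement group (PG).
        digest = hashlib.sha256(object_name.encode()).digest()
        return int.from_bytes(digest[:4], "big") % PG_COUNT

    def pg_to_osds(pg: int) -> list:
        # Deterministically rank OSDs for this PG and take the first REPLICAS.
        ranked = sorted(
            OSDS,
            key=lambda osd: hashlib.sha256(f"{pg}:{osd}".encode()).hexdigest(),
        )
        return ranked[:REPLICAS]

    pg = object_to_pg("my-object")
    print(pg, pg_to_osds(pg))  # identical output on every node that runs it

Because the mapping is a pure function of the object name and the cluster map, clients can write directly to the correct OSDs, and OSDs can replicate, heartbeat, and backfill among themselves without a central broker.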