Server and rack solutions
Hardware vendors provide both optimized server-level and rack-level solution SKUs for Ceph. Validated through joint testing with IBM, these solutions offer predictable price-to-performance ratios for Ceph deployments, with a convenient modular approach to expanding Ceph storage for specific workloads.
Many hardware vendors now offer both Ceph-optimized servers and rack-level solutions designed for distinct workload profiles. IBM works with multiple storage server vendors to test and evaluate specific cluster options for different cluster sizes and workload profiles, simplifying hardware selection and reducing risk for organizations. IBM's exacting methodology combines performance testing with proven guidance for a broad range of cluster capabilities and sizes.
With appropriate storage servers and rack-level solutions, IBM Storage Ceph can provide storage pools that serve various workloads, from throughput-sensitive and cost- and capacity-focused workloads to emerging IOPS-intensive workloads.
- Network switching: Redundant network switching interconnects the cluster and provides access to clients.
- Ceph MON nodes: The Ceph monitor is a datastore for the health of the entire cluster and contains the cluster log. Use a minimum of three monitor nodes for a cluster quorum in production.
- Ceph OSD hosts: Ceph OSD hosts house the storage capacity for the cluster, with one or more OSDs running per individual storage device. OSD hosts are selected and configured differently depending on both workload optimization and the data devices installed: HDDs, SSDs, or NVMe SSDs.
- IBM Storage Ceph: Many vendors provide a capacity-based subscription for IBM Storage Ceph bundled with both server and rack-level solution SKUs.
IOPS-optimized solutions
With the growing use of flash storage, organizations increasingly host IOPS-intensive workloads on Ceph storage clusters, enabling them to emulate high-performance public cloud solutions with private cloud storage. These workloads commonly involve structured data from MySQL-, MariaDB-, or PostgreSQL-based applications.
| Element | Specification |
|---|---|
| CPU | 10 cores per NVMe SSD, assuming a 2 GHz CPU. Note: For non-NVMe SSDs, use two cores per SSD OSD. |
| RAM | 16 GB baseline, plus 5 GB per OSD |
| Networking | 10-Gigabit Ethernet (GbE) per 2 OSDs |
| OSD media | High-performance, high-endurance enterprise NVMe SSDs |
| OSDs | Two per NVMe SSD |
| BlueStore WAL/DB | High-performance, high-endurance enterprise NVMe SSD, colocated with OSDs |
| Controller | Native PCIe bus |
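As an illustrative sketch only (not an official IBM sizing tool), the per-host sizing rules in the table above can be expressed in a few lines of Python; the function name and the example drive count are assumptions for demonstration:

```python
import math

def iops_host_resources(nvme_drives: int) -> dict:
    """Estimate per-host resources for an IOPS-optimized OSD host,
    following the sizing rules in the table above (illustrative only)."""
    osds = 2 * nvme_drives                 # two OSDs per NVMe SSD
    cpu_cores = 10 * nvme_drives           # 10 x 2 GHz cores per NVMe SSD
    ram_gb = 16 + 5 * osds                 # 16 GB baseline plus 5 GB per OSD
    nics_10gbe = math.ceil(osds / 2)       # one 10 GbE link per 2 OSDs
    return {"osds": osds, "cpu_cores": cpu_cores,
            "ram_gb": ram_gb, "nics_10gbe": nics_10gbe}

# Example: a host with 8 NVMe SSDs works out to 16 OSDs, 80 cores,
# 96 GB of RAM, and 8 x 10 GbE of network capacity.
print(iops_host_resources(8))
```

Treat the output as a starting point for quoting hardware, not a substitute for the vendor-validated configurations listed below.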
| Vendor | Small (250 TB) | Medium (1 PB) | Large (2 PB+) |
|---|---|---|---|
| SuperMicro¹ | SYS-5038MR-OSD006P | N/A | N/A |

¹ For more information, see Supermicro® Total Solution for Ceph.
Throughput-optimized solutions
Throughput-optimized Ceph solutions are usually centered around semi-structured or unstructured data. Large-block sequential I/O is typical.
| Element | Specification |
|---|---|
| CPU | 0.5 cores per HDD, assuming a 2 GHz CPU |
| RAM | 16 GB baseline, plus 5 GB per OSD |
| Networking | 10 GbE per 12 OSDs each for client- and cluster-facing networks |
| OSD media | 7,200 RPM enterprise HDDs |
| OSDs | One per HDD |
| BlueStore WAL/DB | High-performance, high-endurance enterprise NVMe SSD, colocated with OSDs |
| Controller | Just a bunch of disks (JBOD) |
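The throughput-optimized rules above can be sketched the same way; this is an illustrative helper (the function name and example HDD count are assumptions), and it doubles the NIC count because the table calls for one link per 12 OSDs on each of the client- and cluster-facing networks:

```python
import math

def throughput_host_resources(hdds: int) -> dict:
    """Estimate per-host resources for a throughput-optimized OSD host,
    following the sizing rules in the table above (illustrative only)."""
    osds = hdds                                # one OSD per HDD
    cpu_cores = math.ceil(0.5 * hdds)          # 0.5 x 2 GHz cores per HDD
    ram_gb = 16 + 5 * osds                     # 16 GB baseline plus 5 GB per OSD
    nics_per_network = math.ceil(osds / 12)    # one 10 GbE link per 12 OSDs
    return {"osds": osds, "cpu_cores": cpu_cores, "ram_gb": ram_gb,
            "nics_10gbe": 2 * nics_per_network}  # client + cluster networks

# Example: a host with 36 HDDs works out to 36 OSDs, 18 cores,
# 196 GB of RAM, and 6 x 10 GbE links in total.
print(throughput_host_resources(36))
```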
Several vendors provide pre-configured server and rack-level solutions for throughput-optimized Ceph workloads. IBM has extensively tested and evaluated servers from Supermicro and Quanta Cloud Technology (QCT).
| Vendor | Small (250 TB) | Medium (1 PB) | Large (2 PB+) |
|---|---|---|---|
| SuperMicro | SRS-42E112-Ceph-03 | SRS-42E136-Ceph-03 | SRS-42E136-Ceph-03 |
| Vendor | Small (250 TB) | Medium (1 PB) | Large (2 PB+) |
|---|---|---|---|
| SuperMicro | SSG-6028R-OSD072P | SSG-6048-OSD216P | SSG-6048-OSD216P |
| QCT¹ | QxStor RCT-200 | QxStor RCT-400 | QxStor RCT-400 |

¹ See QCT: QxStor IBM Storage Ceph Edition.
| Vendor | Small (250 TB) | Medium (1 PB) | Large (2 PB+) |
|---|---|---|---|
| Dell | PowerEdge R730XD¹ | DSS 7000², twin node | DSS 7000, twin node |
| Cisco | UCS C240 M4 | UCS C3260³ | UCS C3260⁴ |
| Lenovo | System x3650 M5 | System x3650 M5 | N/A |

¹ See Dell PowerEdge R730xd Performance and Sizing Guide for IBM Storage Ceph - A Dell IBM Technical White Paper.
Cost and capacity-optimized solutions
Cost- and capacity-optimized solutions typically focus on higher capacity or longer-term archival scenarios. Data can be either semi-structured or unstructured. Workloads include media archives, big data analytics archives, and machine image backups. Large-block sequential I/O is typical.
| Element | Specification |
|---|---|
| CPU | 0.5 cores per HDD, assuming a 2 GHz CPU |
| RAM | 16 GB baseline, plus 5 GB per OSD |
| Networking | 10 GbE per 12 OSDs each for client- and cluster-facing networks |
| OSD media | 7,200 RPM enterprise HDDs |
| OSDs | One per HDD |
| BlueStore WAL/DB | Colocated on the HDD |
| Controller | Just a bunch of disks (JBOD) |
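For capacity planning at this tier, a rough drive count can be derived from the target usable capacity and the replica count; 3-way replication is Ceph's default for replicated pools, while the 10 TB HDD size and the function name here are illustrative assumptions. Real planning must also leave headroom for nearfull/full ratios and failure-domain spread:

```python
import math

def hdd_count_for_usable(usable_tb: float, hdd_tb: float = 10.0,
                         replicas: int = 3) -> int:
    """HDDs needed to reach a target usable capacity under N-way
    replication; ignores full-ratio headroom and failure domains."""
    raw_tb = usable_tb * replicas          # raw capacity = usable x replicas
    return math.ceil(raw_tb / hdd_tb)      # round up to whole drives

# Example: 1 PB usable on 10 TB HDDs with 3-way replication
print(hdd_count_for_usable(1000))   # 300 drives
```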
| Vendor | Small (250 TB) | Medium (1 PB) | Large (2 PB+) |
|---|---|---|---|
| SuperMicro | N/A | SRS-42E136-Ceph-03 | SRS-42E172-Ceph-03 |
| Vendor | Small (250 TB) | Medium (1 PB) | Large (2 PB+) |
|---|---|---|---|
| SuperMicro | N/A | SSG-6048R-OSD216P¹ | SSG-6048R-OSD360P |
| QCT | N/A | QxStor RCC-400 | QxStor RCC-400 |

¹ See Supermicro's Total Solution for Ceph.
| Vendor | Small (250 TB) | Medium (1 PB) | Large (2 PB+) |
|---|---|---|---|
| Dell | N/A | DSS 7000, twin node | DSS 7000, twin node |
| Cisco | N/A | UCS C3260 | UCS C3260 |
| Lenovo | N/A | System x3650 M5 | N/A |