Storage Concepts

NodeFoundry deploys and manages Ceph, a distributed storage system that provides block, file, and object storage from the same cluster.


Ceph Daemons

Daemon                       Role                                        Minimum Count
MON (Monitor)                Maintains cluster map, handles consensus    3 or 5 (odd number)
MGR (Manager)                Collects metrics, provides dashboard        1 (2 recommended)
OSD (Object Storage Daemon)  Stores data on a physical disk              1 per disk
MDS (Metadata Server)        Manages CephFS metadata                     1+ (only if using CephFS)
RGW (RADOS Gateway)          Provides S3-compatible API                  1+ (only if using S3)

Storage Types

Block Storage (RBD)

RADOS Block Device provides thin-provisioned, resizable block volumes. Used by VMs and containers.
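
The NodeFoundry CLI manages the daemons; day-to-day volume operations can be sketched with Ceph's native rbd tool. The pool and image names below are placeholders, and the commands assume direct access to a configured cluster:

```shell
# Create a 10 GiB thin-provisioned image in a placeholder pool "vms"
rbd create --size 10240 vms/disk01

# Map it on the local host, exposing a block device such as /dev/rbd0
rbd map vms/disk01

# Volumes are resizable online; grow to 20 GiB
rbd resize --size 20480 vms/disk01
```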

File Storage (CephFS)

A POSIX-compliant distributed filesystem. Requires at least one MDS daemon. Create with:

  • nf ceph mds --node <id> — deploy a metadata server
  • nf ceph fs --name <name> --metadataPool <pool> --dataPool <pool> — create the filesystem
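
Putting the two steps together, a filesystem creation might look like the following. The node ID and pool names are placeholders, and the sketch assumes the metadata and data pools already exist:

```shell
# Deploy a metadata server on a placeholder node
nf ceph mds --node node-01

# Create the filesystem, pointing it at pre-existing placeholder pools
nf ceph fs --name shared --metadataPool cephfs-meta --dataPool cephfs-data
```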

Object Storage (RGW/S3)

S3-compatible object storage via RADOS Gateway. Deploy with:

  • nf ceph rgw deploy --node <id> — deploy a gateway
  • nf ceph rgw user create --id <user> --name <name> — create access credentials
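
A minimal deployment sequence, with a placeholder node ID and user fields:

```shell
# Deploy a gateway on a placeholder node
nf ceph rgw deploy --node node-01

# Create S3 access credentials for a placeholder user
nf ceph rgw user create --id app-user --name "App User"
```

Once a gateway is running, any S3-compatible client can be pointed at its endpoint (upstream Ceph's RGW listens on port 7480 by default) using the generated credentials.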

Pools and Placement Groups

Pools are logical partitions of storage. Each pool has a replication strategy and a set of placement groups (PGs) that determine how data maps to OSDs.

Pool Setting        Description
Replication factor  Number of copies (typically 3)
Placement groups    Hashing buckets for data distribution (auto-calculated in most cases)
Failure domain      Level at which Ceph tolerates failures: host, rack, or zone
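
When PG counts are not auto-calculated, a common rule of thumb is (OSD count × 100) / replication factor, rounded up to a power of two. A sketch with example numbers (not read from a real cluster):

```shell
# Rule-of-thumb PG count for one pool; example values, not cluster output
osd_count=12
replicas=3

# Target roughly 100 PGs per OSD, shared across the replicas
target=$(( osd_count * 100 / replicas ))   # 400

# Round up to the next power of two
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done

echo "pg_num=$pg_num"   # prints pg_num=512
```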

Capacity Planning

With 3x replication (the default), your usable capacity is roughly 1/3 of raw capacity:

Raw Capacity   Usable (3x replication)
30 TB          ~10 TB
100 TB         ~33 TB
500 TB         ~166 TB

Keep cluster usage below 80% for optimal performance and recovery headroom.
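
The arithmetic behind the table and the 80% ceiling can be sketched as follows; the raw capacity is an example figure, not read from a cluster:

```shell
# Usable capacity under 3x replication, plus the 80% usage ceiling
raw_tb=100        # example raw capacity in TB
replication=3

usable_tb=$(( raw_tb / replication ))    # ~33 TB usable
safe_tb=$(( usable_tb * 80 / 100 ))      # ~26 TB before hitting the 80% ceiling

echo "usable=${usable_tb}TB safe=${safe_tb}TB"   # prints usable=33TB safe=26TB
```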