Deploying Ceph

A step-by-step guide to deploying a production Ceph cluster using NodeFoundry.


Prerequisites

  • At least 3 registered nodes (check with nf node list)
  • Available disks on each node (check with nf disks list)
  • Network connectivity between all nodes
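
The node-count requirement can be sanity-checked with a short script. The output format of nf node list is not documented here, so this sketch assumes one node per line and substitutes a stand-in listing to stay self-contained:

```shell
# Sketch: confirm at least 3 registered nodes before deploying.
# Assumes `nf node list` prints one node per line (an assumption about its
# output format); a stand-in listing is used so the snippet runs as-is.
nodes='node-1
node-2
node-3'                      # stand-in for: nodes=$(nf node list)
count=$(printf '%s\n' "$nodes" | wc -l)
if [ "$count" -ge 3 ]; then
  echo "ok: $count nodes registered"
else
  echo "need at least 3 nodes, found $count" >&2
fi
```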

Deployment Order

Ceph daemons must be deployed in a specific order:

  1. Monitors (MON) — cluster consensus
  2. Managers (MGR) — metrics and orchestration
  3. OSDs (Object Storage Daemons) — storage capacity
  4. MDS (Metadata Servers) — only if using CephFS
  5. RGW (RADOS Gateway) — only if using S3

Step 1: Deploy Monitors

Deploy 3 monitors (or 5 for larger clusters); an odd count is what lets the cluster keep quorum through failures. Each monitor needs a dedicated disk.

Terminal
$ nf ceph mon --node <node-1> --disk <disk-1>
$ nf ceph mon --node <node-2> --disk <disk-2>
$ nf ceph mon --node <node-3> --disk <disk-3>
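
The 3-or-5 rule follows from how monitor quorum works: the cluster needs a strict majority, floor(n/2) + 1, so an even monitor count tolerates no more failures than the odd count below it. A quick illustration of the arithmetic:

```shell
# Quorum is a strict majority of monitors: floor(n/2) + 1.
# Note that 2 mons tolerate no failures, and 4 tolerate no more than 3 do.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "mons=$n quorum=$quorum tolerates=$(( n - quorum )) failure(s)"
done
```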

Step 2: Deploy Managers

Deploy at least one manager. A second manager runs on standby and takes over if the active one fails.

Terminal
$ nf ceph mgr --node <node-1>

Step 3: Deploy OSDs

Create one OSD per available disk. This is where your data lives.

Terminal
$ nf disks list    # find available disks
$ nf ceph osd --node <node-1> --disk <disk-a>
$ nf ceph osd --node <node-1> --disk <disk-b>
$ nf ceph osd --node <node-2> --disk <disk-c>
...
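
With many disks, the per-disk commands can be generated from a node/disk list. The pairs below are examples (in practice, build the list from the output of nf disks list, whose exact format is not shown here), and the loop only echoes each command so you can review it first:

```shell
# Sketch: one OSD per available disk, driven by a "node disk" pair list.
# The pairs below are examples; build the real list from `nf disks list`.
while read -r node disk; do
  # Replace `echo` with the real command once the list looks right:
  echo nf ceph osd --node "$node" --disk "$disk"
done <<'EOF'
node-1 disk-a
node-1 disk-b
node-2 disk-c
EOF
```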

Step 4: Verify

Check that all tasks completed:

Terminal
$ nf tasks list
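
The check can also be scripted. The column layout of nf tasks list is not documented here, so this sketch assumes one task per line ending in its status, with sample output substituted to keep it self-contained:

```shell
# Sketch: flag any task whose line does not end in "done".
# Column layout of `nf tasks list` is an assumption; sample output stands in.
tasks='mon-deploy done
mgr-deploy done
osd-deploy done'             # stand-in for: tasks=$(nf tasks list)
pending=$(printf '%s\n' "$tasks" | grep -cv ' done$')
if [ "$pending" -eq 0 ]; then
  echo "all tasks completed"
else
  echo "$pending task(s) still pending or failed" >&2
fi
```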

Optional: CephFS

If you need a POSIX filesystem:

Terminal
$ nf ceph mds --node <node-1>
$ nf ceph fs --name myfs --metadataPool myfs_meta --dataPool myfs_data

Optional: S3 Object Storage

If you need S3-compatible storage:

Terminal
$ nf ceph rgw deploy --node <node-1>
$ nf ceph rgw user create --id myuser --name "My User"

The user’s access and secret keys are returned. Use them with nf s3 or any S3-compatible client.
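
With the keys in hand, a generic S3 client can be pointed at the gateway. A minimal sketch using standard AWS-style credential environment variables (the endpoint host and key values are placeholders; 7480 is RGW's default port):

```shell
# Placeholders -- substitute the keys returned by `nf ceph rgw user create`.
export AWS_ACCESS_KEY_ID="<access-key>"
export AWS_SECRET_ACCESS_KEY="<secret-key>"
# Most S3 clients pick these up automatically, e.g. with the AWS CLI:
#   aws s3 ls --endpoint-url "http://<rgw-host>:7480"
echo "S3 credentials exported"
```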