# Deploying Ceph
A step-by-step guide to deploying a production Ceph cluster using NodeFoundry.
## Prerequisites

- At least 3 registered nodes (check with `nf node list`)
- Available disks on each node (check with `nf disks list`)
- Network connectivity between all nodes
## Deployment Order

Ceph daemons must be deployed in a specific order:

1. Monitors (MON) — cluster consensus
2. Managers (MGR) — metrics and orchestration
3. OSDs — storage capacity
4. MDS — only if using CephFS
5. RGW — only if using S3
## Step 1: Deploy Monitors

Deploy exactly 3 (or 5) monitors for quorum. Each monitor needs a dedicated disk.

```
$ nf ceph mon --node <node-1> --disk <disk-1>
$ nf ceph mon --node <node-2> --disk <disk-2>
$ nf ceph mon --node <node-3> --disk <disk-3>
```
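When node and disk names follow a pattern, the three commands above can be scripted. A minimal sketch, where the node and disk names are placeholders and the loop echoes each command instead of running it (drop the `echo` to execute):

```shell
#!/bin/sh
# Deploy one monitor per node, each with its own dedicated disk.
# NODES and DISKS are illustrative placeholders -- substitute real names
# taken from `nf node list` and `nf disks list`.
NODES="node-1 node-2 node-3"
DISKS="disk-1 disk-2 disk-3"

set -- $DISKS                    # positional parameters now hold the disks
for node in $NODES; do
  disk=$1; shift                 # pair each node with the next disk
  echo nf ceph mon --node "$node" --disk "$disk"   # drop `echo` to run
done
```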
## Step 2: Deploy Managers

Deploy at least one manager. Two gives you failover.

```
$ nf ceph mgr --node <node-1>
```
## Step 3: Deploy OSDs

Create one OSD per available disk. This is where your data lives.

```
$ nf disks list                                  # find available disks
$ nf ceph osd --node <node-1> --disk <disk-a>
$ nf ceph osd --node <node-1> --disk <disk-b>
$ nf ceph osd --node <node-2> --disk <disk-c>
...
```
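Since clusters often have many disks, the per-disk commands above lend themselves to a loop. A sketch, where the `node:disk` pairs are illustrative placeholders and each command is echoed rather than executed (drop the `echo` to run):

```shell
#!/bin/sh
# Create one OSD per disk. Each entry pairs a node with one of its disks,
# using values you would read off `nf disks list`.
PAIRS="node-1:disk-a node-1:disk-b node-2:disk-c"

for pair in $PAIRS; do
  node=${pair%%:*}               # text before the colon
  disk=${pair#*:}                # text after the colon
  echo nf ceph osd --node "$node" --disk "$disk"   # drop `echo` to run
done
```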
## Step 4: Verify

Check that all deployment tasks completed:

```
$ nf tasks list
```
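For unattended deployments it can help to poll rather than check once. A sketch of a polling helper; the status words it greps for ("running", "pending") are an assumption about the `nf tasks list` output format, so adjust the pattern to whatever the CLI actually prints:

```shell
#!/bin/sh
# Block until the given command's output no longer mentions an in-flight
# task. Status strings are an assumption -- verify against real output.
wait_for_tasks() {
  while "$@" | grep -qiE 'running|pending'; do
    sleep 5
  done
}

# Intended usage against the real CLI (assumption):
#   wait_for_tasks nf tasks list
```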
## Optional: CephFS

If you need a POSIX filesystem, deploy a metadata server (MDS) and then create a filesystem:

```
$ nf ceph mds --node <node-1>
$ nf ceph fs --name myfs --metadataPool myfs_meta --dataPool myfs_data
```
## Optional: S3 Object Storage

If you need S3-compatible object storage, deploy a RADOS Gateway (RGW) and create a user:

```
$ nf ceph rgw deploy --node <node-1>
$ nf ceph rgw user create --id myuser --name "My User"
```

The user's access and secret keys are returned by the `user create` command. Use them with `nf s3` or any S3 client.
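The returned keys work with any standard S3 client via the usual AWS-style environment variables. A sketch, where the key values and the gateway address are placeholders:

```shell
#!/bin/sh
# Point a generic S3 client at the RADOS Gateway using the keys returned
# by `nf ceph rgw user create`. The key values and endpoint address below
# are placeholders -- substitute your own.
export AWS_ACCESS_KEY_ID="<access-key>"
export AWS_SECRET_ACCESS_KEY="<secret-key>"

# Example with the AWS CLI, assuming the gateway listens on <node-1>:
#   aws s3 ls --endpoint-url "http://<node-1>"
```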