nf ceph
The nf ceph command is the entry point for all Ceph cluster management. It provides subcommands for creating and configuring the core Ceph daemons that make up a storage cluster.
Use these commands only after nodes have been registered and their disks are visible via nf disks list.
Command reference
| Command | Flags | Description |
|---|---|---|
| nf ceph fs | --name --metadataPool --dataPool | Create a CephFS filesystem |
| nf ceph mon | --node --disk | Create a Ceph Monitor |
| nf ceph osd | --node --disk | Create an OSD on a disk |
| nf ceph mds | --node | Create a Metadata Server |
| nf ceph mgr | --node | Create a Manager daemon |
| nf ceph rgw list | | List RGW instances |
| nf ceph rgw deploy | --node | Deploy an RGW instance |
| nf ceph rgw remove | --id | Remove an RGW instance |
| nf ceph rgw user list | | List RGW users |
| nf ceph rgw user create | --id --name --email | Create an RGW user |
Subcommands
fs
Create a CephFS filesystem with metadata and data pools.
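A usage sketch: the filesystem and pool names below are illustrative, and the flags are the ones documented above (whether the pools are created by this command or must already exist is not specified here).

```shell
# Create a CephFS filesystem named fs1 (names are illustrative).
nf ceph fs --name fs1 --metadataPool fs1-meta --dataPool fs1-data
```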
mon
Create a Monitor for cluster consensus and map management.
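Monitors form a quorum, so they are typically deployed in odd numbers across separate nodes. A sketch using the documented flags (node names and disk path are illustrative):

```shell
# Three Monitors on three separate nodes for quorum.
for n in node1 node2 node3; do
  nf ceph mon --node "$n" --disk /dev/sda
done
```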
osd
Create an OSD to provide storage capacity on a disk.
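Each OSD is backed by a single disk, so a node with several data disks gets one command per disk. A sketch (node name and disk paths are illustrative):

```shell
# One OSD per data disk on node1.
for d in /dev/sdb /dev/sdc /dev/sdd; do
  nf ceph osd --node node1 --disk "$d"
done
```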
mds
Create a Metadata Server for CephFS.
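A sketch (node name is illustrative):

```shell
# An MDS is only needed if you intend to create a CephFS filesystem.
nf ceph mds --node node2
```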
mgr
Create a Manager for metrics and orchestration.
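A sketch (node names are illustrative; in Ceph generally, a second Manager acts as a standby, though this CLI's handling of multiple Managers is not documented here):

```shell
# At least one Manager is required for cluster health reporting.
nf ceph mgr --node node1
nf ceph mgr --node node2
```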
rgw
Manage RADOS Gateway instances and S3 users.
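A sketch of the RGW lifecycle using the documented subcommands (node name, user id, display name, and email are all illustrative):

```shell
# Deploy a gateway, confirm it is running, then create an S3 user.
nf ceph rgw deploy --node node3
nf ceph rgw list
nf ceph rgw user create --id demo --name "Demo User" --email demo@example.com
nf ceph rgw user list
```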
Typical deployment order
A standard Ceph cluster deployment follows this sequence:
- Monitors — deploy 3 or 5 for quorum (nf ceph mon)
- Managers — at least 1 for cluster health (nf ceph mgr)
- OSDs — one per available disk (nf ceph osd)
- MDS — if you need CephFS (nf ceph mds)
- CephFS — create the filesystem (nf ceph fs)
- RGW — if you need S3 (nf ceph rgw deploy)
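The sequence above can be sketched as a single script. All node names, disk paths, and pool names are illustrative placeholders; only the commands and flags come from the reference table.

```shell
#!/bin/sh
set -e

# 1. Monitors: three for quorum (illustrative nodes and disk).
for n in node1 node2 node3; do
  nf ceph mon --node "$n" --disk /dev/sda
done

# 2. Manager: at least one for cluster health.
nf ceph mgr --node node1

# 3. OSDs: one per available data disk.
for d in /dev/sdb /dev/sdc; do
  nf ceph osd --node node1 --disk "$d"
done

# 4. MDS and CephFS: only if you need a filesystem.
nf ceph mds --node node2
nf ceph fs --name fs1 --metadataPool fs1-meta --dataPool fs1-data

# 5. RGW: only if you need S3 access.
nf ceph rgw deploy --node node3
```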