Quick Start
This guide walks through installing the NodeFoundry master node, registering your first storage servers, and deploying a 3-node Ceph cluster. Estimated time: 15 minutes.
1. Install the master node
Download the NodeFoundry installer and run it on the machine that will act as your control plane. This can be a physical server, VM, or cloud instance. Minimum requirements: 4 vCPU, 8 GB RAM, 50 GB disk.
$ curl -fsSL https://get.nodefoundry.io | bash
The installer sets up the NodeFoundry service, iPXE HTTP server, and built-in DHCP server. When complete, the dashboard is available at http://<master-ip>:3000.
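Before moving on, you can confirm the dashboard is reachable from your workstation. The sketch below is illustrative: the retry count is arbitrary, and MASTER_IP defaults to localhost, so replace it with your master node’s address.

```shell
# Poll the NodeFoundry dashboard until it responds (illustrative sketch).
# MASTER_IP is an assumption here; set it to your master node's IP.
MASTER_IP=${MASTER_IP:-127.0.0.1}
STATUS=down
for attempt in 1 2 3; do
  if curl -fsS --max-time 2 "http://${MASTER_IP}:3000" >/dev/null 2>&1; then
    STATUS=up
    break
  fi
  sleep 1
done
echo "dashboard is ${STATUS}"
```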
2. Configure your network switch
Storage nodes need to PXE boot from the master. On your managed switch, set DHCP option 66 (next-server) and option 67 (boot filename) to point at the master node’s IP:
ip dhcp pool STORAGE
 network 10.10.1.0 255.255.255.0
 default-router 10.10.1.1
 next-server 10.10.1.10          ! master node IP
 filename "ipxe/boot.ipxe"
If you’re using the master node’s built-in DHCP server instead of your switch’s, no configuration is needed — it handles this automatically.
3. Boot your storage nodes
Power on your storage servers. Each node will:
- Request a DHCP address from the master
- Pull the NodeFoundry bootstrap image over HTTP
- Boot into the bootstrap environment and inventory hardware
- Register with the control plane over the management network
Nodes appear in the dashboard within 30–60 seconds of power-on. You can also check from the CLI:
$ nf node list
NAME        STATUS   DRIVES   RAM    IPMI
storage-01  pending  8x NVMe  256 G  10.10.1.21
storage-02  pending  8x NVMe  256 G  10.10.1.22
storage-03  pending  8x NVMe  256 G  10.10.1.23
Note — Nodes in pending status have registered but not yet been assigned to a cluster. This is expected.
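If you want to script against this output, a minimal parsing sketch follows. It uses a copy of the sample listing above; in a live environment you would capture the output of nf node list instead.

```shell
# Find nodes still pending cluster assignment.
# Sample output copied from the listing above; live equivalent:
#   nf_output=$(nf node list)
nf_output='NAME        STATUS   DRIVES   RAM    IPMI
storage-01  pending  8x NVMe  256 G  10.10.1.21
storage-02  pending  8x NVMe  256 G  10.10.1.22
storage-03  pending  8x NVMe  256 G  10.10.1.23'

# Skip the header row, keep rows whose STATUS column is "pending".
pending=$(printf '%s\n' "$nf_output" | awk 'NR > 1 && $2 == "pending" { print $1 }')
echo "$pending"
```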
4. Deploy Ceph
Once your nodes are registered, create the Ceph cluster components. Start with monitors, then managers, then OSDs:
$ nf ceph mon --node <node-1-id> --disk <disk-id>
$ nf ceph mon --node <node-2-id> --disk <disk-id>
$ nf ceph mon --node <node-3-id> --disk <disk-id>
$ nf ceph mgr --node <node-1-id>
$ nf ceph osd --node <node-1-id> --disk <disk-id>
$ nf ceph osd --node <node-2-id> --disk <disk-id>
$ nf ceph osd --node <node-3-id> --disk <disk-id>
Use nf disks list to find available disk IDs on each node.
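When creating OSDs on many nodes, a simple loop saves typing. The sketch below is a dry run: it only echoes the commands it would issue, the node names are taken from the sample listing above, and the disk ID is a placeholder you would look up per node with nf disks list. Drop the echo to actually run the commands.

```shell
# Dry-run sketch: print the OSD-creation command for each node.
# Node names come from the sample listing; <disk-id> is a placeholder.
nodes="storage-01 storage-02 storage-03"
for node in $nodes; do
  disk="<disk-id>"   # look up the real ID per node with: nf disks list
  echo nf ceph osd --node "$node" --disk "$disk"
done
```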
5. Verify health
Check that all tasks completed and the cluster is healthy:
$ nf tasks list
a1b2c3d4  ceph.create-mon  completed  ...
e5f6a7b8  ceph.create-mgr  completed  ...
c9d0e1f2  ceph.create-osd  completed  ...
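For an automated health gate, you can check that no task is in a state other than completed. This sketch parses a copy of the sample output above; in a live environment you would capture nf tasks list directly.

```shell
# Flag any task that has not completed.
# Sample output copied from above; live equivalent:
#   tasks=$(nf tasks list)
tasks='a1b2c3d4  ceph.create-mon  completed
e5f6a7b8  ceph.create-mgr  completed
c9d0e1f2  ceph.create-osd  completed'

# Collect the IDs of tasks whose third column is not "completed".
failed=$(printf '%s\n' "$tasks" | awk '$3 != "completed" { print $1 }')
if [ -z "$failed" ]; then
  echo "all tasks completed"
else
  echo "incomplete tasks: $failed"
fi
```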
What’s next
- CLI Reference — Complete nf command documentation
- REST API — Automate cluster operations via the API
- Deploying Ceph — Detailed Ceph deployment guide
- S3 Object Storage — Set up RGW for S3-compatible storage