Storage

This cluster uses Rook Ceph to provide distributed storage across all nodes. Each Intel NUC contributes its SSD to a shared Ceph cluster, which is then exposed to workloads via Kubernetes StorageClasses. An external Synology NAS provides additional NFS storage for large media files.

Ceph Cluster

The Ceph cluster is deployed via the rook-ceph Helm chart (v1.19.3) in the rook-ceph namespace. It uses all available nodes and devices automatically.

| Component | Count | Resources |
|---|---|---|
| Monitors (mon) | 3 | 500m CPU, 512Mi memory |
| Managers (mgr) | 2 | 500m CPU, 512Mi memory |
| OSDs | auto (all devices) | 1000m CPU, 1024Mi memory |

Key features enabled:

  • Dashboard — accessible via HTTP route for cluster monitoring
  • Monitoring — Prometheus metrics are exported
  • Disk prediction — the diskprediction_local module is enabled for drive health forecasting
  • PG autoscaler — automatically tunes placement group counts
  • CSI read affinity — reads are served from the closest OSD based on topology
  • Discovery daemon — automatically detects new devices

The Ceph cluster data directory is stored at /var/lib/rook on each node.
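Putting the values above together, the CephCluster resource looks roughly like this. This is a minimal sketch assembled from the tables and feature list in this section, not the full manifest; the resource name is assumed, and resource requests/limits are omitted:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph        # assumed name, matching the namespace
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  mgr:
    count: 2
    modules:
      - name: diskprediction_local   # drive health forecasting
        enabled: true
      - name: pg_autoscaler          # automatic PG tuning
        enabled: true
  dashboard:
    enabled: true
    ssl: false           # exposed over plain HTTP per the route above
  monitoring:
    enabled: true        # export Prometheus metrics
  csi:
    readAffinity:
      enabled: true      # serve reads from the topologically closest OSD
  storage:
    useAllNodes: true
    useAllDevices: true
```

Note that the discovery daemon is typically toggled via the Helm chart's `enableDiscoveryDaemon` value rather than the CephCluster spec itself.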

Block Pools

Block pools define how data is replicated across the Ceph cluster. Two pools exist with different durability guarantees.

block-pool (Non-Replicated)

| Property | Value |
|---|---|
| Failure domain | host |
| Replication size | 1 (no redundancy) |
| RBD mirroring | enabled (image mode) |
| RBD stats | enabled |

Warning: This pool has no data redundancy. If a single OSD or node is lost, data in this pool is lost. Use only for data that is easily recreatable or non-critical.

safe-block-pool (Replicated)

| Property | Value |
|---|---|
| Failure domain | host |
| Replication size | 3 (full redundancy) |
| RBD mirroring | enabled (image mode) |
| RBD stats | enabled |

This pool stores three copies of every block across different hosts, surviving up to two simultaneous host failures.
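A CephBlockPool with these properties can be sketched as follows (field values taken from the table above; the non-replicated block-pool is identical apart from `replicated.size: 1`):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: safe-block-pool
  namespace: rook-ceph
spec:
  failureDomain: host    # place each replica on a different node
  replicated:
    size: 3              # three copies of every object
  mirroring:
    enabled: true
    mode: image          # mirror individual RBD images
  enableRBDStats: true   # per-image I/O metrics for Prometheus
```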

Object Store

object-store-replicated (S3-Compatible)

A Ceph Object Store providing S3-compatible storage with the following configuration:

| Component | Failure Domain | Strategy | Details |
|---|---|---|---|
| Metadata pool | host | Replicated | 3 copies |
| Data pool | host | Erasure coded | 2 data + 1 coding chunk |
| Gateway | | | 2 instances, port 80 |

Pool preservation is enabled (preservePoolsOnDelete: true), so data pools are retained even if the object store resource is deleted.
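The corresponding CephObjectStore resource is roughly as follows. This is a sketch built only from the configuration stated above; gateway resource requests and placement are omitted:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: object-store-replicated
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3            # bucket index/metadata is fully replicated
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2      # object data split into 2 data chunks
      codingChunks: 1    # plus 1 parity chunk (tolerates one host loss)
  preservePoolsOnDelete: true   # keep pools if this resource is deleted
  gateway:
    port: 80
    instances: 2         # two RGW pods behind one service
```

With 2 data + 1 coding chunk, each object consumes 1.5× its size on disk, compared with 3× for full replication, while still surviving a single host failure.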

Storage Classes

Storage classes are the primary interface for workloads to request storage. The cluster defines three storage classes.

Block Storage Classes

All block storage classes use the rook-ceph.rbd.csi.ceph.com provisioner with ext4 filesystem, Retain reclaim policy, Immediate volume binding, and volume expansion enabled.

| Storage Class | Pool | Replication | Default |
|---|---|---|---|
| ceph-block | block-pool | 1× | ✅ Yes |
| ceph-block-replicated | safe-block-pool | 3× | No |
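A block StorageClass with these settings can be sketched as below. The real definitions also carry the CSI secret parameters (provisioner and node secret names), which are omitted here for brevity:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-block
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # cluster default
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph             # namespace of the Rook cluster
  pool: block-pool                 # 1x replication; see warning above
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Retain              # keep the RBD image when the PVC is deleted
volumeBindingMode: Immediate
allowVolumeExpansion: true
```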

Object Storage Classes

| Storage Class | Object Store | Provisioner |
|---|---|---|
| ceph-object-replicated | object-store-replicated | rook-ceph.ceph.rook.io/bucket |
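Workloads request buckets from this class via an ObjectBucketClaim rather than a PVC. A minimal sketch, with a hypothetical claim name:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: example-bucket       # hypothetical name
spec:
  generateBucketName: example          # prefix for the generated bucket name
  storageClassName: ceph-object-replicated
```

Rook provisions the bucket and writes the S3 endpoint and credentials into a ConfigMap and Secret named after the claim.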

Choosing a Storage Class

  • ceph-block — Use for non-critical, easily recreatable data where performance matters more than durability (1× replication).
  • ceph-block-replicated — Use for important application data that must survive node failures (3× replication).
  • ceph-object-replicated — Use for S3-compatible object/bucket storage with erasure coding.
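For example, an application needing durable config storage would claim a replicated block volume like this (the claim name and namespace are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-config       # hypothetical name
  namespace: home
spec:
  storageClassName: ceph-block-replicated
  accessModes:
    - ReadWriteOnce          # RBD volumes mount on a single node at a time
  resources:
    requests:
      storage: 10Gi
```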

NFS Storage

The Synology NAS provides NFS volumes for large media and download directories. NFS PVCs use the nfs storage class with ReadWriteMany access mode, allowing multiple pods to mount the same volume simultaneously. These are typically sized at 1Mi as nominal placeholders since the actual storage is managed by the NAS.
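A typical NFS claim therefore looks like this (using one of the claims from the inventory below):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plex-media
  namespace: media
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany          # multiple pods may mount the share at once
  resources:
    requests:
      storage: 1Mi           # nominal placeholder; capacity lives on the NAS
```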

Volume Inventory

By Namespace

developer

| PVC Name | Storage Class | Size | Access Mode | Application |
|---|---|---|---|---|
| opencode-data | ceph-block-replicated | 50Gi | ReadWriteOnce | OpenCode |

download

| PVC Name | Storage Class | Size | Access Mode | Application |
|---|---|---|---|---|
| lidarr-config | ceph-block-replicated | 20Gi | ReadWriteOnce | Lidarr |
| lidarr-download | nfs | 1Mi | ReadWriteMany | Lidarr |
| lidarr-media | nfs | 1Mi | ReadWriteMany | Lidarr |
| prowlarr-config | ceph-block-replicated | 20Gi | ReadWriteOnce | Prowlarr |
| radarr-config | ceph-block-replicated | 20Gi | ReadWriteOnce | Radarr |
| radarr-download | nfs | 1Mi | ReadWriteMany | Radarr |
| radarr-media | nfs | 1Mi | ReadWriteMany | Radarr |
| sabnzbd-config | ceph-block-replicated | 20Gi | ReadWriteOnce | SABnzbd |
| sabnzbd-download | nfs | 1Mi | ReadWriteMany | SABnzbd |
| seerr-config | ceph-block-replicated | 10Gi | ReadWriteOnce | Seerr |
| sonarr-config | ceph-block-replicated | 20Gi | ReadWriteOnce | Sonarr |
| sonarr-download | nfs | 1Mi | ReadWriteMany | Sonarr |
| sonarr-media | nfs | 1Mi | ReadWriteMany | Sonarr |

home

| PVC Name | Storage Class | Size | Access Mode | Application |
|---|---|---|---|---|
| n8n-config | ceph-block-replicated | 10Gi | ReadWriteOnce | n8n |

media

| PVC Name | Storage Class | Size | Access Mode | Application |
|---|---|---|---|---|
| autoscan-media | nfs | 1Mi | ReadWriteMany | Autoscan |
| ersatztv-config | ceph-block-replicated | 5Gi | ReadWriteOnce | ErsatzTV |
| ersatztv-media | nfs | 1Mi | ReadWriteMany | ErsatzTV |
| plex-config | ceph-block-replicated | 300Gi | ReadWriteOnce | Plex |
| plex-media | nfs | 1Mi | ReadWriteMany | Plex |
| tautulli-config | ceph-block-replicated | 20Gi | ReadWriteOnce | Tautulli |

rook-ceph (via Helm)

| Component | Storage Class | Size | Application |
|---|---|---|---|
| ClickHouse persistence | ceph-block-replicated | 50Gi | SigNoz |

Summary by Storage Class

| Storage Class | Consumers | Total Ceph Storage |
|---|---|---|
| ceph-block-replicated | 12 | 545Gi |
| nfs | 10 | N/A (NAS-managed) |

Largest Consumers

| Application | PVC | Size |
|---|---|---|
| Plex | plex-config | 300Gi |
| OpenCode | opencode-data | 50Gi |
| SigNoz | ClickHouse persistence | 50Gi |
| Lidarr | lidarr-config | 20Gi |
| Prowlarr | prowlarr-config | 20Gi |
| Radarr | radarr-config | 20Gi |
| SABnzbd | sabnzbd-config | 20Gi |
| Sonarr | sonarr-config | 20Gi |
| Tautulli | tautulli-config | 20Gi |

Configuration Reference

All Rook Ceph resources are defined in:

cluster/core/rook-ceph/rook-ceph/
├── kustomization.yaml
├── values.yaml
└── resources/
    ├── ceph-block-pool.yaml
    ├── ceph-block-pool-replicated.yaml
    ├── ceph-cluster.yaml
    ├── ceph-object-store-replicated.yaml
    ├── storage-class-ceph-block.yaml
    ├── storage-class-ceph-block-replicated.yaml
    └── storage-class-ceph-object-replicated.yaml

Individual application PVCs are defined alongside their deployments in:

cluster/apps/<namespace>/<application>/resources/pvc-*.yaml