Self-driving local storage for Linux
ArgosFS is a Rust FUSE filesystem for heterogeneous local disks, erasure-coded redundancy, rootfs experiments, and auditable health automation.
Implemented features
ArgosFS exposes normal filesystem behavior while keeping the redundancy, placement, health, and recovery decisions inspectable from the CLI and retained artifacts.
Implements lookup, getattr, setattr, create, read, write, rename, xattrs, readdir, fsync, statfs, symlinks, hardlinks, devices, FIFOs, and sockets.
Configurable k+m Reed-Solomon stripes distribute data and parity shards across disk failure domains, with reconstruction on degraded reads.
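As a simplified illustration of the k+m idea, a single-parity stripe (m = 1) behaves like a degenerate Reed-Solomon code: any one missing shard can be rebuilt by XOR-ing the survivors. The sketch below is hypothetical and stdlib-only; the real encoder generalizes to arbitrary m with Reed-Solomon arithmetic.

```rust
// Split a buffer into k equal data shards (zero-padded) plus one XOR
// parity shard -- a degenerate k+1 stripe illustrating k+m encoding.
fn encode_stripe(data: &[u8], k: usize) -> Vec<Vec<u8>> {
    let shard_len = (data.len() + k - 1) / k;
    let shards: Vec<Vec<u8>> = (0..k)
        .map(|i| {
            let mut s = vec![0u8; shard_len];
            let start = i * shard_len;
            if start < data.len() {
                let end = (start + shard_len).min(data.len());
                s[..end - start].copy_from_slice(&data[start..end]);
            }
            s
        })
        .collect();
    // Parity shard: byte-wise XOR of the k data shards.
    let mut parity = vec![0u8; shard_len];
    for s in &shards {
        for (p, b) in parity.iter_mut().zip(s) {
            *p ^= b;
        }
    }
    let mut out = shards;
    out.push(parity);
    out
}

// Reconstruct the single missing shard by XOR-ing all surviving shards
// (data and parity alike), mirroring a degraded read.
fn reconstruct(shards: &[Option<Vec<u8>>]) -> Vec<u8> {
    let len = shards.iter().flatten().next().unwrap().len();
    let mut out = vec![0u8; len];
    for s in shards.iter().flatten() {
        for (o, b) in out.iter_mut().zip(s) {
            *o ^= b;
        }
    }
    out
}
```

With k data shards and one parity shard, losing any one shard still leaves enough information to recover the stripe; Reed-Solomon extends the same property to any m simultaneous losses.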
Weighted, tier-aware rendezvous hashing uses capacity, tier, latency, NUMA locality, and health pressure instead of assuming identical disks.
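The core of weighted rendezvous (highest-random-weight) hashing fits in a few lines. In this hypothetical sketch the weight is a single number; the real placer folds capacity, tier, latency, NUMA locality, and health pressure into it.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical disk descriptor: the weight stands in for the combined
// capacity/tier/latency/health score.
struct Disk {
    id: String,
    weight: f64,
}

// Hash (shard key, disk id) into the open interval (0, 1).
fn hash01(key: &str, disk: &str) -> f64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    disk.hash(&mut h);
    (h.finish() as f64 + 1.0) / (u64::MAX as f64 + 2.0)
}

// Weighted rendezvous: each disk scores -weight / ln(h) and the shard
// goes to the highest score, so placement follows weight rather than
// assuming identical disks.
fn place<'a>(shard_key: &str, disks: &'a [Disk]) -> &'a Disk {
    disks
        .iter()
        .max_by(|a, b| {
            let sa = -a.weight / hash01(shard_key, &a.id).ln();
            let sb = -b.weight / hash01(shard_key, &b.id).ln();
            sa.partial_cmp(&sb).unwrap()
        })
        .unwrap()
}
```

Because each shard key is scored independently against every disk, adding or draining a disk only remaps the shards that now score highest on it, rather than reshuffling the whole volume.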
Per-stripe compression supports zstd, lz4, or none; authenticated encryption uses XChaCha20-Poly1305 with Argon2id-derived keys.
SMART refresh, latency EWMAs, capacity pressure, risk memory, cooldown gates, and dry-run explanations drive conservative maintenance plans.
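Two of those signals compose naturally: a latency EWMA supplies the pressure reading, and a cooldown gate keeps the planner conservative between actions. A hypothetical, stdlib-only sketch (field names and thresholds are illustrative):

```rust
use std::time::{Duration, Instant};

// Hypothetical per-disk health tracker: smooths latency samples with an
// EWMA and gates maintenance proposals behind a cooldown window.
struct DiskHealth {
    ewma_ms: f64,
    alpha: f64, // smoothing factor, e.g. 0.2
    last_action: Option<Instant>,
    cooldown: Duration,
}

impl DiskHealth {
    // Standard exponentially weighted moving average update.
    fn observe_latency(&mut self, sample_ms: f64) {
        self.ewma_ms = self.alpha * sample_ms + (1.0 - self.alpha) * self.ewma_ms;
    }

    // Propose an action only when latency pressure exceeds the threshold
    // AND the cooldown since the last action has fully elapsed.
    fn may_act(&self, threshold_ms: f64, now: Instant) -> bool {
        let cooled = self
            .last_action
            .map_or(true, |t| now.duration_since(t) >= self.cooldown);
        self.ewma_ms > threshold_ms && cooled
    }
}
```

The EWMA damps one-off latency spikes, and the cooldown prevents the planner from stacking maintenance actions on a disk that is still settling from the previous one.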
Scrub, fsck repair, orphan cleanup, read reconstruction, deferred self-heal, disk drain, add-disk, and weighted rebalance are available from the CLI.
Copy-on-write JSON metadata, primary and secondary copies, hash-checked journal snapshots, replay, mismatch detection, and named snapshots make recovery auditable.
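The heart of a copy-on-write metadata commit is the write-fsync-rename pattern. This hypothetical, stdlib-only sketch shows only that step; the real system additionally maintains a secondary copy and hash-checked journal snapshots.

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Hypothetical copy-on-write commit: the new JSON is written to a sibling
// temp file, flushed to stable storage, then renamed over the primary, so
// a reader (or a crash) sees either the old version or the new one --
// never a partially written mix.
fn commit_metadata(primary: &Path, json: &[u8]) -> std::io::Result<()> {
    let tmp = primary.with_extension("tmp");
    let mut f = File::create(&tmp)?;
    f.write_all(json)?;
    f.sync_all()?; // durability point: bytes reach the disk before the swap
    fs::rename(&tmp, primary)?; // atomic on POSIX within one filesystem
    Ok(())
}
```

Because `rename` is atomic within a filesystem, the primary metadata file is replaced in a single step, which is what makes replay and mismatch detection against the journal tractable.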
Buffered I/O, direct-I/O fallback, io_uring fallback, mmap-backed read staging, a RAM + L2 cache, and a Prometheus exporter keep the data path fast and observable.
Technical principle
ArgosFS does not let automation freely mutate the volume. Every action is proposed, bounded, checked against recoverability constraints, executed in small budgets, and verified before the next step.
Probe media class, capacity, backing devices, SMART fields, latency, throughput, cache behavior, and file heat.
Compare observe, scrub, drain, self-heal, and rebalance candidates with explicit rejection reasons when a safety guard fails.
Move shards incrementally, preserve redundancy targets, respect capacity reservations, and throttle background work.
Run journal validation, fsck checks, shard hash verification, and downgrade automation when post-action validation fails.
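The propose-and-guard step above can be sketched as a pure evaluation function. Everything here is a hypothetical simplification (guard names, fields, and messages are illustrative); the point is that every rejection carries an explicit reason and no action exceeds its budget.

```rust
// Hypothetical verdict type: a candidate action is either approved or
// rejected with an explicit, human-readable reason.
#[derive(Debug, PartialEq)]
enum Verdict {
    Approved,
    Rejected(&'static str),
}

// A proposed maintenance action, e.g. a drain or rebalance step.
struct Candidate {
    shards_to_move: u64,
}

// Safety guards checked before any mutation of the volume.
struct Guards {
    redundancy_ok: bool, // would the move keep k+m recoverability?
    free_capacity: u64,  // shard slots reserved on target disks
    budget: u64,         // max shards this maintenance window
}

fn evaluate(c: &Candidate, g: &Guards) -> Verdict {
    if !g.redundancy_ok {
        return Verdict::Rejected("would drop below redundancy target");
    }
    if c.shards_to_move > g.free_capacity {
        return Verdict::Rejected("target disks lack reserved capacity");
    }
    if c.shards_to_move > g.budget {
        return Verdict::Rejected("exceeds per-window work budget");
    }
    Verdict::Approved
}
```

Keeping the guards in one pure function is what makes dry-run explanations cheap: the planner can print the verdict for every candidate without touching the volume.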
Data path
Writes enter through FUSE, pass permission and ACL checks, are chunked into stripes, optionally compressed and encrypted, encoded into data/parity shards, placed across weighted disks, and committed through a copy-on-write metadata transaction.