Self-driving local storage for Linux


ArgosFS is a Rust FUSE filesystem for heterogeneous local disks, erasure-coded redundancy, rootfs experiments, and auditable health automation.

FUSE: real mounted Linux filesystem frontend
4+2: configurable Reed-Solomon data + parity
SMART: disk-health signals for automation
rootfs: initramfs and systemd integration assets

Implemented features

A complete experimental filesystem surface.

ArgosFS exposes normal filesystem behavior while keeping the redundancy, placement, health, and recovery decisions inspectable from the CLI and retained artifacts.

01 Rootfs-capable FUSE frontend

Implements lookup, getattr, setattr, create, read, write, rename, xattrs, readdir, fsync, statfs, symlinks, hardlinks, devices, FIFOs, and sockets.

02 Erasure-coded storage

Configurable k+m Reed-Solomon stripes distribute data and parity shards across disk failure domains, with reconstruction on degraded reads.
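For intuition, here is a minimal sketch of the k+m idea for the simplest case, m = 1, where the Reed-Solomon parity shard reduces to a byte-wise XOR of the k data shards. The function names and layout are illustrative, not ArgosFS's actual API; the general k+m case needs full Galois-field arithmetic.

```rust
// Split a stripe into k equal data shards plus one XOR parity shard
// (Reed-Solomon with m = 1). Any single lost shard can be rebuilt.
fn encode(stripe: &[u8], k: usize) -> Vec<Vec<u8>> {
    let shard_len = (stripe.len() + k - 1) / k; // ceil(len / k)
    let mut shards: Vec<Vec<u8>> = (0..k)
        .map(|i| {
            let start = (i * shard_len).min(stripe.len());
            let end = ((i + 1) * shard_len).min(stripe.len());
            let mut s = stripe[start..end].to_vec();
            s.resize(shard_len, 0); // zero-pad the final shard
            s
        })
        .collect();
    // Parity shard: byte-wise XOR across all data shards.
    let mut parity = vec![0u8; shard_len];
    for s in &shards {
        for (p, b) in parity.iter_mut().zip(s) {
            *p ^= b;
        }
    }
    shards.push(parity);
    shards
}

// Rebuild one missing shard (data or parity) by XOR-ing the survivors,
// which is exactly what a degraded read has to do.
fn reconstruct(shards: &[Option<Vec<u8>>]) -> Vec<u8> {
    let len = shards.iter().flatten().next().unwrap().len();
    let mut out = vec![0u8; len];
    for s in shards.iter().flatten() {
        for (o, b) in out.iter_mut().zip(s) {
            *o ^= b;
        }
    }
    out
}
```

With m parity shards the same stripe survives any m simultaneous shard losses, which is why shards are spread across distinct disk failure domains.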

03 Heterogeneous disk placement

Weighted, tier-aware rendezvous hashing uses capacity, tier, latency, NUMA locality, and health pressure instead of assuming identical disks.
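A minimal sketch of weighted rendezvous (highest-random-weight) hashing, assuming each disk's capacity, tier, and health pressure have already been folded into a single weight; the disk and function names are hypothetical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Score one (disk, object) pair. Mapping the hash to u in (0, 1) and
// using -weight / ln(u) makes each disk's expected share of objects
// proportional to its weight, without any central placement table.
fn score(disk: &str, object: &str, weight: f64) -> f64 {
    let mut h = DefaultHasher::new();
    (disk, object).hash(&mut h);
    let u = (h.finish() as f64 + 1.0) / (u64::MAX as f64 + 2.0);
    -weight / u.ln()
}

// Every disk scores the object independently; the highest score wins.
// Adding or removing a disk only remaps the objects it wins or loses.
fn place<'a>(disks: &[(&'a str, f64)], object: &str) -> &'a str {
    disks
        .iter()
        .max_by(|a, b| {
            score(a.0, object, a.1)
                .partial_cmp(&score(b.0, object, b.1))
                .unwrap()
        })
        .map(|(name, _)| *name)
        .unwrap()
}
```

Because each disk scores every object independently, a shard's placement is deterministic and can be recomputed anywhere, with no lookup table to keep consistent.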

04 Compression and encryption

Per-stripe compression supports zstd, lz4, or none. Authenticated encryption uses Argon2id-derived keys and XChaCha20-Poly1305.

05 Health autopilot

SMART refresh, latency EWMAs, capacity pressure, risk memory, cooldown gates, and dry-run explanations drive conservative maintenance plans.
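Two of those signals are simple enough to sketch: an exponentially weighted moving average over latency samples, and a cooldown gate that blocks back-to-back maintenance actions. Struct and field names here are illustrative, not ArgosFS internals.

```rust
// Smoothed latency signal: each new sample moves the average by a
// fraction alpha, so spikes register without dominating the estimate.
struct LatencyEwma {
    value_ms: f64,
    alpha: f64, // smoothing factor, e.g. 0.2
}

impl LatencyEwma {
    fn observe(&mut self, sample_ms: f64) {
        self.value_ms = self.alpha * sample_ms + (1.0 - self.alpha) * self.value_ms;
    }
}

// Cooldown gate: a disk that was just acted on cannot be acted on
// again until the cooldown elapses, keeping the autopilot conservative.
struct Gate {
    last_action_tick: u64,
    cooldown_ticks: u64,
}

impl Gate {
    fn allows(&self, now: u64) -> bool {
        now.saturating_sub(self.last_action_tick) >= self.cooldown_ticks
    }
}
```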

06 Repair and self-heal

Scrub, fsck repair, orphan cleanup, read reconstruction, deferred self-heal, disk drain, add-disk, and weighted rebalance are available from the CLI.

07 Durable metadata

Copy-on-write JSON metadata, primary and secondary copies, hash-checked journal snapshots, replay, mismatch detection, and named snapshots make recovery auditable.
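The copy-on-write commit pattern can be sketched with the standard library alone: write the new metadata to a temporary file, sync it, then atomically rename it over the old copy. The function and file names are illustrative, not ArgosFS's actual layout.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Copy-on-write commit: the destination file is never written in
// place, so a crash leaves either the old copy or the new one intact.
fn commit_metadata(dir: &Path, name: &str, json: &[u8]) -> std::io::Result<()> {
    let tmp = dir.join(format!("{name}.tmp"));
    let dst = dir.join(name);
    let mut f = fs::File::create(&tmp)?;
    f.write_all(json)?;
    f.sync_all()?; // durability point: bytes reach the disk before the swap
    fs::rename(&tmp, &dst)?; // atomic on POSIX: readers see old or new, never a torn file
    Ok(())
}
```

Keeping primary and secondary copies then means running this commit against two locations and comparing hashes on load, which is what makes mismatch detection and auditable recovery possible.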

08 Advanced I/O and observability

Buffered I/O, direct-I/O fallback, io_uring fallback, mmap-backed read staging, RAM + L2 cache, and a Prometheus exporter expose the data path.
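The RAM + L2 tiering can be sketched as a two-level cache with demotion on eviction and promotion on L2 hits. The eviction policy here is plain FIFO for brevity, and all names are hypothetical; the real cache policy may differ.

```rust
use std::collections::{HashMap, VecDeque};

// Two-tier read cache: a small RAM tier in front of a larger L2 tier.
struct TieredCache {
    ram: HashMap<u64, Vec<u8>>,
    ram_order: VecDeque<u64>, // FIFO eviction order for the RAM tier
    ram_cap: usize,
    l2: HashMap<u64, Vec<u8>>,
}

impl TieredCache {
    fn new(ram_cap: usize) -> Self {
        Self {
            ram: HashMap::new(),
            ram_order: VecDeque::new(),
            ram_cap,
            l2: HashMap::new(),
        }
    }

    fn insert_ram(&mut self, key: u64, val: Vec<u8>) {
        if self.ram.len() >= self.ram_cap {
            if let Some(old) = self.ram_order.pop_front() {
                // Demote the evicted block to L2 instead of dropping it.
                if let Some(v) = self.ram.remove(&old) {
                    self.l2.insert(old, v);
                }
            }
        }
        self.ram_order.push_back(key);
        self.ram.insert(key, val);
    }

    fn get(&mut self, key: u64) -> Option<Vec<u8>> {
        if let Some(v) = self.ram.get(&key) {
            return Some(v.clone());
        }
        if let Some(v) = self.l2.remove(&key) {
            self.insert_ram(key, v.clone()); // promote the hot block to RAM
            return Some(v);
        }
        None
    }
}
```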

Technical principle

Safety-constrained storage automation.

ArgosFS does not let automation freely mutate the volume. Every action is proposed, bounded, checked against recoverability constraints, executed in small budgets, and verified before the next step.

Observe

Collect disk and workload state

Probe media class, capacity, backing devices, SMART fields, latency, throughput, cache behavior, and file heat.

Score

Rank safe actions

Compare observe, scrub, drain, self-heal, and rebalance candidates with explicit rejection reasons when a safety guard fails.

Mutate

Rewrite only within budgets

Move shards incrementally, preserve redundancy targets, respect capacity reservations, and throttle background work.

Verify

Prove the volume still mounts

Run journal validation, fsck checks, shard hash verification, and downgrade automation when post-action validation fails.
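The observe, score, mutate, verify loop hinges on candidates failing closed with an explicit reason. A minimal sketch of that gating, with hypothetical state fields and thresholds:

```rust
// Every candidate action either passes all safety guards or is
// rejected with a reason the CLI can surface; nothing mutates silently.
#[derive(Debug, PartialEq)]
enum Decision {
    Approve,
    Reject(&'static str),
}

struct VolumeState {
    degraded_stripes: usize,
    free_space_pct: f64,
    io_budget_left: u64,
}

fn plan_rebalance(v: &VolumeState) -> Decision {
    if v.degraded_stripes > 0 {
        // Never shuffle shards while redundancy is below target.
        return Decision::Reject("degraded stripes must heal first");
    }
    if v.free_space_pct < 10.0 {
        return Decision::Reject("capacity reservation would be violated");
    }
    if v.io_budget_left == 0 {
        return Decision::Reject("background I/O budget exhausted");
    }
    Decision::Approve
}
```

A dry run simply prints these decisions without executing the approved ones, which is what makes the planner's reasoning auditable after the fact.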

Data path

From POSIX operation to recoverable shards.

Writes enter through FUSE, pass permission and ACL checks, are chunked into stripes, optionally compressed and encrypted, encoded into data/parity shards, placed across weighted disks, and committed through a copy-on-write metadata transaction.

01 FUSE request
02 ACL + inode checks
03 Compression / encryption
04 Reed-Solomon encoding
05 Tier-aware placement
06 COW metadata commit

Source available

Read the code, run the tests, inspect every planner decision.

Open GitHub Repository