A fault-tolerant, in-memory vector database built from first principles. Raft consensus, HNSW indexing, WAL durability — REST & gRPC ready.
Query a distributed vector cluster with plain HTTP. No SDK required.
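Since any HTTP client can talk to a node, a raw TCP socket works too. Here is a minimal sketch that builds such a request from Rust's standard library alone; the `/search` path, JSON shape, and port are assumptions for illustration, not the project's documented API:

```rust
// Build a plain-HTTP search request by hand. Endpoint path, JSON body
// shape, and address are hypothetical examples, not the real API.
fn search_request(host: &str, vector: &[f32], k: usize) -> String {
    let dims: Vec<String> = vector.iter().map(|v| v.to_string()).collect();
    let body = format!(r#"{{"vector":[{}],"k":{}}}"#, dims.join(","), k);
    format!(
        "POST /search HTTP/1.1\r\nHost: {}\r\nContent-Type: application/json\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
        host,
        body.len(),
        body
    )
}

fn main() {
    // Write this string to a std::net::TcpStream connected to any node.
    println!("{}", search_request("127.0.0.1:8080", &[0.1, 0.2], 5));
}
```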
No off-the-shelf consensus library. No managed search engine. Every component written from scratch in Rust.
Full Raft implementation with leader election, log replication, and Pre-Vote extension to prevent term inflation during network partitions.
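The core of Pre-Vote can be sketched in a few lines; struct and function names below are illustrative, not the crate's real API. A node stuck behind a partition keeps timing out, but because it can never win a Pre-Vote round it never increments its term, so it cannot force a re-election when the partition heals:

```rust
// Hedged sketch of the Pre-Vote check. Names are hypothetical.
#[derive(Clone)]
struct Peer {
    current_term: u64,
    last_log_term: u64,
    last_log_index: u64,
    heard_from_leader_recently: bool,
}

// Would this peer grant a pre-vote? Crucially, no term changes on either side.
fn grants_pre_vote(peer: &Peer, cand_term: u64, cand_log_term: u64, cand_log_index: u64) -> bool {
    !peer.heard_from_leader_recently              // a live leader vetoes the round
        && cand_term >= peer.current_term         // candidate's *proposed* term
        && (cand_log_term, cand_log_index) >= (peer.last_log_term, peer.last_log_index)
}

// The candidate only starts a real election (and bumps its term) if a
// strict majority of the full cluster would grant the pre-vote.
fn wins_pre_vote(peers: &[Peer], cand_term: u64, cand_log_term: u64, cand_log_index: u64) -> bool {
    let votes = 1 + peers
        .iter()
        .filter(|p| grants_pre_vote(p, cand_term, cand_log_term, cand_log_index))
        .count();
    votes > (peers.len() + 1) / 2
}
```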
Hierarchical Navigable Small World graph held in RAM. O(log N) queries, 95%+ recall at sub-millisecond latency; M=16 is the sweet spot for 128-dimensional vectors.
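The heart of an HNSW layer traversal is a greedy walk: hop to whichever neighbor is closest to the query until no neighbor improves. A toy sketch under stated assumptions (adjacency as a `HashMap`, squared-L2 distance; not the engine's actual data structures):

```rust
use std::collections::HashMap;

// Squared Euclidean distance (the square root is monotone, so skipping it
// does not change which neighbor is closest).
fn dist(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum()
}

/// Greedy search on a single HNSW layer: move to the closest neighbor
/// until no neighbor beats the current node. Illustrative only; the real
/// index also keeps a candidate beam (ef) and multiple layers.
fn greedy_search(
    graph: &HashMap<usize, Vec<usize>>,
    vecs: &[Vec<f32>],
    entry: usize,
    query: &[f32],
) -> usize {
    let mut cur = entry;
    let mut best = dist(&vecs[cur], query);
    loop {
        let mut improved = false;
        for &n in &graph[&cur] {
            let d = dist(&vecs[n], query);
            if d < best {
                best = d;
                cur = n;
                improved = true;
            }
        }
        if !improved {
            return cur;
        }
    }
}
```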
CRC32-verified WAL with crash recovery. Corrupt tail entries are detected and truncated automatically on restart, so no acknowledged write is ever lost.
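The recovery scan works like this sketch: each record carries a length and a CRC32 over its payload, and the first short or mismatching record marks the truncation point. The record layout (`[len u32 LE][crc u32 LE][payload]`) is an assumption for illustration, not the project's actual on-disk format:

```rust
// Bitwise CRC32 (IEEE polynomial), enough for the sketch.
fn crc32(data: &[u8]) -> u32 {
    let mut crc = 0xFFFF_FFFFu32;
    for &b in data {
        crc ^= b as u32;
        for _ in 0..8 {
            crc = if crc & 1 != 0 { (crc >> 1) ^ 0xEDB8_8320 } else { crc >> 1 };
        }
    }
    !crc
}

// Append one record: [len u32 LE][crc u32 LE][payload]. Hypothetical layout.
fn append(wal: &mut Vec<u8>, payload: &[u8]) {
    wal.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    wal.extend_from_slice(&crc32(payload).to_le_bytes());
    wal.extend_from_slice(payload);
}

/// Scan a WAL buffer; return valid payloads and the byte offset at which
/// the caller should truncate (start of the first torn or corrupt record).
fn recover(wal: &[u8]) -> (Vec<Vec<u8>>, usize) {
    let mut entries = Vec::new();
    let mut off = 0usize;
    while wal.len() - off >= 8 {
        let len = u32::from_le_bytes(wal[off..off + 4].try_into().unwrap()) as usize;
        let crc = u32::from_le_bytes(wal[off + 4..off + 8].try_into().unwrap());
        if wal.len() - off - 8 < len {
            break; // torn write: payload shorter than the header promised
        }
        let payload = &wal[off + 8..off + 8 + len];
        if crc32(payload) != crc {
            break; // bit rot or partial entry: checksum mismatch
        }
        entries.push(payload.to_vec());
        off += 8 + len;
    }
    (entries, off)
}
```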
Tokio's biased select! ensures heartbeats are never delayed by slow searches, keeping the cluster stable even under heavy query load.
Native gRPC with streaming upserts for high-throughput clients. Axum HTTP layer for zero-friction access via curl or Postman.
6 isolated crates: common, engine, WAL, raft, transport, server. Clean boundaries enable independent testing of every subsystem.
Production-grade resilience isn't a claim — it's a test suite.
Leader dies mid-heartbeat. New leader elected in under 300ms via election timeout.
Leader isolated from majority. Pre-Vote prevents term inflation. Majority side elects new leader.
Network heals after partition. Higher-term leader wins. Logs reconcile to single consistent state.
Crash during write, partial entry. CRC32 detects corruption. Corrupt tail truncated and recovered automatically.
Follower falls behind leader. Leader backtracks nextIndex[] and resends missing entries via AppendEntries.
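The nextIndex[] backtracking loop can be sketched as follows. This is a simplified model (terms only, no commit index, no network), with hypothetical names, not the crate's real types:

```rust
// One log entry; payload elided for the sketch.
#[derive(Clone, PartialEq, Debug)]
struct Entry {
    term: u64,
}

/// Follower side of the AppendEntries consistency check (simplified).
/// prev_index is 1-based; 0 means "before the first entry".
fn append_entries(log: &mut Vec<Entry>, prev_index: usize, prev_term: u64, entries: &[Entry]) -> bool {
    if prev_index > log.len() {
        return false; // follower's log is too short to contain prev_index
    }
    if prev_index > 0 && log[prev_index - 1].term != prev_term {
        return false; // logs diverge at prev_index
    }
    log.truncate(prev_index); // drop any conflicting suffix
    log.extend_from_slice(entries);
    true
}

/// Leader side: on each rejection, decrement next_index and retry until
/// the follower accepts, then replicate everything from there onward.
fn catch_up(leader_log: &[Entry], follower_log: &mut Vec<Entry>, mut next_index: usize) {
    loop {
        let prev_index = next_index - 1;
        let prev_term = if prev_index == 0 { 0 } else { leader_log[prev_index - 1].term };
        if append_entries(follower_log, prev_index, prev_term, &leader_log[prev_index..]) {
            return; // logs reconciled
        }
        next_index -= 1; // back off one entry and retry
    }
}
```

Real implementations often skip back a whole term per rejection instead of one entry, but the invariant is the same: once prev_index/prev_term match, everything after is overwritten with the leader's entries.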
Clone, build, run. A 3-node Raft cluster with one script.